How to evaluate Helm v3 printf output as a Jinja expression

Based on the number of nodes, I need to reference each node's certificate and key from the values file.
{{- $root := . -}}
{{ range $k, $v := until (int ($root.Values.data.nodeCount)) }}
apiVersion: v1
kind: Secret
metadata:
  name: {{ $root.Values.clusterName }}-nodes-{{ $k }}-secret
type: Opaque
data:
  crt: {{ printf "$root.Values.data.node_%d_key" $k }}
---
{{- end }}
Example output is below: it doesn't contain the resolved value, only the printf output as a literal string. How can I evaluate the printf output so that it retrieves the result from values.yaml?
---
apiVersion: v1
kind: Secret
metadata:
  name: test-nodes-0-secret
type: Opaque
data:
  crt: $root.Values.data.node_0_key
---
apiVersion: v1
kind: Secret
metadata:
  name: test-nodes-1-secret
type: Opaque
data:
  crt: $root.Values.data.node_1_key
Example values.yaml:
nodeCount: 3
clusterName: "test"
data:
  nodeCount: 2
  node_0_crt: "test"
  node_0_key: "test"
  node_1_crt: "test1"
  node_1_key: "test1"

Note that Helm doesn't use Jinja templating syntax. If you were to try using a Jinja expression in a Helm template it wouldn't work. Helm uses Go templates (with a bunch of custom functions).
For the behavior you want, I think you're looking for the tpl function, which lets you evaluate a string as a Helm template. That might look like this:
{{ range until (int $.Values.data.nodeCount) }}
apiVersion: v1
kind: Secret
metadata:
  name: {{ $.Values.clusterName }}-nodes-{{ . }}-secret
type: Opaque
data:
  crt: {{ tpl (printf "{{ .Values.data.node_%d_key }}" .) $ }}
---
{{- end }}
Note that I've also removed your use of $root; you can just refer to the $ variable if you need to explicitly refer to the root context. I've also slightly simplified the outer range loop.
Given the above template and your sample data, I get the following output:
---
# Source: example/templates/secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: test-nodes-0-secret
type: Opaque
data:
  crt: test
---
# Source: example/templates/secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: test-nodes-1-secret
type: Opaque
data:
  crt: test1
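As an aside, the same dynamic lookup can be done without tpl by building the key as a string and passing it to Go's built-in index function. A minimal sketch, assuming the same values layout as above:

data:
  crt: {{ index $.Values.data (printf "node_%d_key" .) }}

index looks the key up directly in the .Values.data map, so no second template-rendering pass is needed; tpl remains the more general tool when the values themselves contain template syntax.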

Related

Is there a way in Argo CD to take values.yaml dynamically in ApplicationSets?

Is there a way in Argo CD to take values.yaml dynamically per namespace in ApplicationSets? For example:
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: xxxx-application-set
  namespace: argocd
spec:
  generators:
  - list:
      elements:
      - namespace: namespace1
      - namespace: namespace2
      - namespace: namespace3
  template:
    metadata:
      name: '{{namespace}}-test'
    spec:
      project: default
      source:
        repoURL: XXXX.git
        targetRevision: HEAD
        path: xxxx
        helm:
          valueFiles:
          - 'values-{{namespace}}'.yaml
I want to customize the values for each namespace as required.
What you did there actually looks good, but the closing quote is misplaced (it should wrap the whole file name); I would try:
valueFiles:
- "values-{{namespace}}.yaml"
I guess this should work.
Or you can try this:
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: xxxx-application-set
  namespace: argocd
spec:
  generators:
  - list:
      elements:
      - namespace: namespace1
        valuesfile: namespace1_values.yaml
      - namespace: namespace2
        valuesfile: namespace2_values.yaml
      - namespace: namespace3
        valuesfile: namespace3_values.yaml
  template:
    metadata:
      name: '{{namespace}}-test'
    spec:
      project: default
      source:
        repoURL: XXXX.git
        targetRevision: HEAD
        path: xxxx
        helm:
          valueFiles:
          - $valuesfile # or '{{valuesfile}}'
As you can see on that page, you can use build environment variables for the Helm values file path: Helm Docu
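For instance, a sketch using one of Argo CD's build environment variables instead of a generator parameter (assuming $ARGOCD_APP_NAMESPACE fits your file-naming scheme; the file name is a placeholder):

helm:
  valueFiles:
  - values-$ARGOCD_APP_NAMESPACE.yaml

Argo CD resolves the variable at render time, so no per-element valuesfile key is needed in the generator list.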

oc/kubectl patch replaces whole line

I am using oc patch with a replace op to change one string in a DeploymentConfig; this is the command:
oc patch dc abc --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/image", "value": "ab-repository/" },{"op": "replace", "path": "/spec/template/spec/containers/0/image", "value": "bc-repository/" }]'
What it is doing is changing the value like this:
Before: ab-repository/ab:1.0.0
After: bc-repository/
What I want is this:
Before: ab-repository/ab:1.0.0
After: bc-repository/ab:1.0.0
Please let me know what I am doing wrong here.
Below is the YAML:
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  name: ruleengine
  namespace: apps
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    name: ruleengine
  strategy:
    activeDeadlineSeconds: 21600
    resources: {}
    rollingParams:
      intervalSeconds: 1
      updatePeriodSeconds: 1
    type: Rolling
  template:
    metadata:
      creationTimestamp: null
      labels:
        name: ruleengine
    spec:
      containers:
      - image: ab-repository/ab:1.0.0 # containers are provided as an array
The replace operation works by removing and re-adding the entire value. Per RFC 6902:
This operation is functionally identical to a "remove" operation for a value, followed immediately by an "add" operation at the same location with the replacement value.
There is no JSON Patch operation that replaces a value partially (RFC 6902, RFC 7386).
You can get the current image like this:
oc get dc ruleengine -o=jsonpath='{..image}'
Then manipulate the value with sed and use the result in oc patch.
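Putting that together, a minimal sketch (assuming the single-container DeploymentConfig above; the registry names come from the question):

# Read the current image, swap only the registry prefix with sed,
# then patch the full reconstructed value back in.
CURRENT=$(oc get dc ruleengine -o=jsonpath='{..image}')           # ab-repository/ab:1.0.0
NEW=$(echo "$CURRENT" | sed 's|^ab-repository/|bc-repository/|') # bc-repository/ab:1.0.0
oc patch dc ruleengine --type='json' \
  -p="[{\"op\": \"replace\", \"path\": \"/spec/template/spec/containers/0/image\", \"value\": \"$NEW\"}]"

The patch still replaces the whole string; the partial edit happens in the shell before the patch is applied.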

How to deploy the OPA (Open Policy Agent) adapter on a local Minikube cluster

https://github.com/istio/istio/tree/master/mixer/adapter/opa
I have deployed the Bookinfo sample application and want to do policy enforcement through OPA. I have also applied this configuration:
apiVersion: "config.istio.io/v1alpha2"
kind: handler
metadata:
name: opa
namespace: istio-system
spec:
compiledAdapter: opa
params:
policy:
- |+
package mixerauthz
default allow = false
checkMethod: "data.mixerauthz.allow"
failClose: true
---
apiVersion: "config.istio.io/v1alpha2"
kind: rule
metadata:
name: authorization
namespace: istio-system
spec:
actions:
- handler: opa.istio-system
instances:
- authzinstance.authorization
selector: "true"
---
apiVersion: "config.istio.io/v1alpha2"
kind: instance
metadata:
name: authzinstance
namespace: istio-system
spec:
template: authorization
params:
subject:
user: source.uid | ""
action:
namespace: destination.namespace | "default"
service: destination.service | ""
method: request.method | ""
path: request.path | ""
Here, allow defaults to false in the handler, so nothing should be shown to the user, but it does not affect anything (no enforcement happens); the application still shows everything just like before.

Caching in OpenShift binary builds doesn't work

According to the documentation, OpenShift binary builds support caching of Docker layers:
https://docs.openshift.com/enterprise/3.1/dev_guide/builds.html#no-cache
Using OpenShift 3.11.
This is a sample BuildConfig that does not cache Docker layers between builds. I've explicitly set noCache to false to avoid any confusion; it did not help.
apiVersion: v1
kind: Template
metadata:
  name: build-template-binary
  labels:
    template: build-template-binary
objects:
- apiVersion: build.openshift.io/v1
  kind: BuildConfig
  metadata:
    annotations:
      openshift.io/generated-by: OpenShiftNewBuild
    labels:
      build: ${NAME}
    name: ${NAME}
  spec:
    failedBuildsHistoryLimit: 50
    output:
      to:
        kind: ImageStreamTag
        name: ${IMAGE_STREAM_NAME}:latest
    runPolicy: Serial
    source:
      type: Binary
    strategy:
      dockerStrategy:
        noCache: false
    successfulBuildsHistoryLimit: 20
parameters:
- name: NAME
  required: true
- name: IMAGE_STREAM_NAME
  required: true
Every time I run
oc start-build my-build-name --from-dir=. --follow
every single step in my Dockerfile gets executed; no caching occurs.

How to define a Kubernetes Job using a private Docker registry?

I have a simple Kubernetes Job (based on the http://kubernetes.io/docs/user-guide/jobs/work-queue-2/ example) which uses a Docker image that I have placed as a public image on my Docker Hub account. It all looks like this:
job.yaml:
apiVersion: batch/v1
kind: Job
metadata:
  name: job-wq-2
spec:
  parallelism: 2
  template:
    metadata:
      name: job-wq-2
    spec:
      containers:
      - name: c
        image: jonalv/job-wq-2
      restartPolicy: OnFailure
Now I want to try to instead use a private Docker registry which requires authentication as in:
docker login https://myregistry.com
But I can't find anything about how to add a username and password to my job.yaml file. How is it done?
You need to use imagePullSecrets.
Once you create a secret object, you can refer to it in your pod spec (the spec value that is the parent of containers):
apiVersion: batch/v1
kind: Job
metadata:
  name: job-wq-2
spec:
  parallelism: 2
  template:
    metadata:
      name: job-wq-2
    spec:
      imagePullSecrets:
      - name: myregistrykey
      containers:
      - name: c
        image: jonalv/job-wq-2
      restartPolicy: OnFailure
Of course, you'll have to create the secret (as per the docs). This is what it will look like:
apiVersion: v1
kind: Secret
metadata:
  name: myregistrykey
  namespace: mynamespace
data:
  .dockerconfigjson: UmVhbGx5IHJlYWxseSByZWVlZWVlZWVlZWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWxsbGxsbGxsbGxsbGxsbGxsbGxsbGxsbGxsbGxsbGx5eXl5eXl5eXl5eXl5eXl5eXl5eSBsbGxsbGxsbGxsbGxsbG9vb29vb29vb29vb29vb29vb29vb29vb29vb25ubm5ubm5ubm5ubm5ubm5ubm5ubm5ubmdnZ2dnZ2dnZ2dnZ2dnZ2dnZ2cgYXV0aCBrZXlzCg==
type: kubernetes.io/dockerconfigjson
The value of .dockerconfigjson is a base64 encoding of this file: .docker/config.json.
The key point: A job spec contains a pod spec. So whatever knowledge you gain about pod specs can be applied to jobs as well.
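If you'd rather not base64-encode the config file by hand, kubectl can generate the same kind of secret for you. A sketch, where the server and credentials are placeholders:

kubectl create secret docker-registry myregistrykey \
  --namespace mynamespace \
  --docker-server=https://myregistry.com \
  --docker-username=myuser \
  --docker-password=mypassword \
  --docker-email=me@example.com

This produces a secret of type kubernetes.io/dockerconfigjson, which is exactly what imagePullSecrets expects.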
