argocd pass dynamic variables to a helm release - devops

I have a set of applications, such as Prometheus and Grafana, that I would like to deploy on several EKS clusters.
I have this set up inside one Git repo with an app of apps that each cluster can reference.
My issue is making small changes in the values for these deployments; let's say for the Grafana deployment I want a unique URL per cluster:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: grafana
  namespace: argocd
spec:
  project: default
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - PrunePropagationPolicy=foreground
      - CreateNamespace=true
    retry:
      limit: 2
      backoff:
        duration: 5s
        maxDuration: 3m0s
        factor: 2
  destination:
    server: "https://kubernetes.default.svc"
    namespace:
  source:
    repoURL:
    targetRevision:
    chart:
    helm:
      releaseName: grafana
      values: |
        ...
        ...
        hostname/url: {cluster_name}.grafana.... <-----
        ...
        ...
So far the only way I see of doing this is by having multiple values files. Is there a way to make it read values from ConfigMaps, or maybe pass down a variable through the app of apps to make this work?
Any help is appreciated.

I'm afraid there is not (yet) a good generic solution for templating values.yaml for Helm charts in ArgoCD.
Still, for your exact case, ArgoCD already has all you need.
Your "I have a set of applications" should naturally bring you to the ApplicationSet Controller and its features.
For iterating over the set of clusters, I'd recommend looking at ApplicationSet Generators, in particular the Cluster Generator. Then your example would look something like this:
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: 'grafana'
  namespace: 'argocd'
  finalizers:
    - 'resources-finalizer.argocd.argoproj.io'
spec:
  generators:
    - clusters: # select only "remote" clusters
        selector:
          matchLabels:
            'argocd.argoproj.io/secret-type': 'cluster'
  template:
    metadata:
      name: 'grafana-{{ name }}'
    spec:
      project: 'default'
      destination:
        server: '{{ server }}'
        namespace: 'grafana'
      source:
        path:
        repoURL:
        targetRevision:
        helm:
          releaseName: grafana
          values: |
            ...
            ...
            hostname/url: {{ name }}.grafana.... <-----
            ...
            ...
      syncPolicy:
        automated:
          prune: true
          selfHeal: true
        syncOptions:
          - PrunePropagationPolicy=foreground
          - CreateNamespace=true
        retry:
          limit: 2
          backoff:
            duration: 5s
            maxDuration: 3m0s
            factor: 2
Also check the full Application definition for examples of how to override particular parameters through:
...
helm:
  # Extra parameters to set (same as setting through values.yaml, but these take precedence)
  parameters:
    - name: "nginx-ingress.controller.service.annotations.external-dns\\.alpha\\.kubernetes\\.io/hostname"
      value: mydomain.example.com
    - name: "ingress.annotations.kubernetes\\.io/tls-acme"
      value: "true"
      forceString: true # ensures that value is treated as a string
  # Use the contents of files as parameters (uses Helm's --set-file)
  fileParameters:
    - name: config
      path: files/config.json
As well as a combination of the inline values with valueFiles: for common options.
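For instance, a minimal sketch of the source section combining a shared values file with per-cluster inline overrides could look like the following. This assumes the chart lives at a path in your Git repo (so valueFiles can sit next to it) and uses a hypothetical values-common.yaml; the hostname key is only a placeholder for whatever your chart actually expects:
source:
  repoURL: <your repo>
  targetRevision: <revision>
  path: charts/grafana            # assumption: chart is vendored in your Git repo
  helm:
    releaseName: grafana
    valueFiles:
      - values-common.yaml        # hypothetical shared defaults, relative to the chart path
    values: |                     # per-cluster overrides rendered by the ApplicationSet template
      hostname: '{{ name }}.grafana.example.com'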

Related

how to run a job in each node of kubernetes instead of daemonset

There is a Kubernetes cluster with 100 nodes. I have to clean up specific images manually. I know the kubelet garbage collector may help, but it isn't applicable in my case.
After browsing the internet, I found a solution - Docker in Docker - to solve my problem.
I just want to remove the image on each node once. Is there any way to run a job on each node once?
I checked Kubernetes labels and pod affinity, but I still have no ideas. Could anybody help?
Also, I tried to use a DaemonSet to solve the problem, but it turns out that it only removes the image on a part of the nodes instead of all nodes. I don't know what the problem might be...
Here is the DaemonSet example:
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: test-ds
  labels:
    k8s-app: test
spec:
  selector:
    matchLabels:
      k8s-app: test
  template:
    metadata:
      labels:
        k8s-app: test
    spec:
      containers:
        - name: test
          env:
            - name: DELETE_IMAGE_NAME
              value: "nginx"
          image: busybox
          command: ['sh', '-c', 'curl --unix-socket /var/run/docker.sock -X DELETE http://localhost/v1.39/images/$(DELETE_IMAGE_NAME)']
          securityContext:
            privileged: true
          volumeMounts:
            - mountPath: /var/run/docker.sock
              name: docker-sock-volume
          ports:
            - containerPort: 80
      volumes:
        - name: docker-sock-volume
          hostPath:
            # location on host
            path: /var/run/docker.sock
If you want to run your job on a single specific node, you can use the nodeSelector in the Pod spec:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: test
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: test
              image: busybox
              args:
                - /bin/sh
                - -c
                - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure
          nodeSelector:
            name: node3
A DaemonSet should ideally resolve your issue, as it creates a Pod on each available node in the cluster.
You can read more about affinity here: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/
nodeSelector provides a very simple way to constrain pods to nodes with particular labels. The affinity/anti-affinity feature greatly expands the types of constraints you can express. The key enhancement is that the affinity/anti-affinity language is more expressive: it offers more matching rules besides exact matches created with a logical AND operation.
You can use affinity in the Job YAML, something like:
apiVersion: v1
kind: Pod
metadata:
  name: with-node-affinity
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: kubernetes.io/e2e-az-name
                operator: In
                values:
                  - e2e-az1
                  - e2e-az2
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 1
          preference:
            matchExpressions:
              - key: another-node-label-key
                operator: In
                values:
                  - another-node-label-value
  containers:
    - name: with-node-affinity
      image: k8s.gcr.io/pause:2.0
Update
Now, if you have an issue with the DaemonSet, affinity with a Job is also of little use, as a Job creates a single Pod that gets scheduled to a single node according to the affinity rules. You would either have to create 100 Jobs with different affinity rules, or use a Deployment + anti-affinity to schedule the replicas on different nodes.
We will create one Deployment with Pod anti-affinity and make sure that multiple Pods of a single Deployment won't get scheduled on one node.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-deployment
  labels:
    app: test
spec:
  replicas: 100
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
        - name: test
          image: <Image>
          ports:
            - containerPort: 80
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: "app"
                    operator: In
                    values:
                      - test
              topologyKey: "kubernetes.io/hostname"
Try using this Deployment template and replace the image with yours. You can reduce replicas to 10 instead of 100 first, to check whether it is spreading the Pods or not.
Read more at : https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#an-example-of-a-pod-that-uses-pod-affinity
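To check the spread, something like the following should show the Pods landing on different nodes (the app=test label comes from the template above):
# each Pod of the test Deployment should be scheduled on a different node
kubectl get pods -l app=test -o wide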
Extra:
You can also write and use your own custom CRD: https://github.com/darkowlzz/daemonset-job, which behaves as a combination of a DaemonSet and a Job.

Difference between pushing a docker image and installing helm image

I need to learn a CI pipeline in which there is a step for building and pushing an image using a Dockerfile and another step for creating a helm chart image in which there is a definition of the image created by the docker file. After that, there's a CD pipeline in which there's an installation of what was created by the helm chart only.
What is the difference between the image created directly by a Dockerfile and the one which is created by the helm chart? Why isn't the Docker image enough?
Amount of effort
To deploy a service on Kubernetes using a Docker image, you need to manually create various configuration files like deployment.yaml. Such files keep increasing as more and more services are added to your environment.
In a Helm chart, we can provide a list of all services that we wish to deploy in the requirements.yaml file, and Helm will ensure that all those services get deployed to the target environment using the deployment.yaml, service.yaml and values.yaml files.
Configurations to maintain
Adding configuration like routing, ConfigMaps, Secrets, etc. is manual and requires configuration over and above your service deployment.
For example, if you want to add an Nginx proxy to your environment, you need to deploy it separately using the Nginx image along with all the proxy configuration for your functional services.
But with Helm charts, this can be achieved by configuring just one file within your Helm chart: ingress.yaml
Flexibility
Using Docker images, we need to provide configurations for each environment where we want to deploy our services.
But using a Helm chart, we can just override the properties of the existing chart with an environment-specific values.yaml file. This becomes even easier using tools like ArgoCD.
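As a rough illustration (the file names here are assumptions), the same chart can be installed per environment by layering an environment-specific values file on top of the chart defaults:
# hypothetical: same chart, different environment-specific overrides
helm install my-release ./crazy-project -f values-dev.yaml
helm upgrade my-release ./crazy-project -f values-prod.yaml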
Code-Snippet:
Below is one example of a deployment.yaml file that we need to create if we want to deploy one service using a Docker image.
Inline, I have also described how you could alternatively populate a generic deployment.yaml template in a Helm repository using different files like requirements.yaml and values.yaml.
deployment.yaml for one service
crazy-project/charts/accounts/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: accounts
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: accounts
      app.kubernetes.io/instance: crazy-project
  template:
    metadata:
      labels:
        app.kubernetes.io/name: accounts
        app.kubernetes.io/instance: crazy-project
    spec:
      serviceAccountName: default
      automountServiceAccountToken: true
      imagePullSecrets:
        - name: regcred
      containers:
        - image: "image.registry.host/.../accounts:1.2144.0" # <-- This version can be fetched from 'requirements.yaml'
          name: accounts
          env: # <-- All the environment variables can be fetched from 'Values.yaml'
            - name: CLUSTERNAME
              value: "com.company.cloud"
            - name: DB_URI
              value: "mongodb://connection-string&replicaSet=rs1"
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: secretfiles
              mountPath: "/etc/secretFromfiles"
              readOnly: true
            - name: secret-files
              mountPath: "/etc/secretFromfiles"
              readOnly: true
          ports:
            - name: HTTP
              containerPort: 9586
              protocol: TCP
          resources:
            requests:
              memory: 450Mi
              cpu: 250m
            limits:
              memory: 800Mi
              cpu: 1
      volumes:
        - name: secretFromfiles
          secret:
            secretName: secret-from-files
        - name: secretFromValue
          secret:
            secretName: secret-data-vault
            optional: true
            items:...
Your deployment.yaml in the Helm chart could be a generic template (code snippet below) where the details are populated from the values.yaml file.
env:
{{- range $key, $value := .Values.global.envVariable.common }}
  - name: {{ $key }}
    value: {{ $value | quote }}
{{- end }}
Your Values.yaml would look like this:
accounts:
  imagePullSecrets:
    - name: regcred
  envVariable:
    service:
      vars:
        spring_data_mongodb_database: accounts_db
        spring_product_name: crazy-project
        ...
Your requirements.yaml would be like below. 'dependencies' are the services that you wish to deploy.
dependencies:
  - name: accounts
    repository: "<your repo>"
    version: "= 1.2144.0"
  - name: rollover
    repository: "<your repo>"
    version: "= 1.2140.0"

How do I create multiple containers and run different commands inside them using k8s

I have a Kubernetes Job, job.yaml :
---
apiVersion: v1
kind: Namespace
metadata:
  name: my-namespace
---
apiVersion: batch/v1
kind: Job
metadata:
  name: my-job
  namespace: my-namespace
spec:
  template:
    spec:
      containers:
        - name: my-container
          image: gcr.io/project-id/my-image:latest
          command: ["sh", "run-vpn-script.sh", "/to/download/this"] # need to run this multiple times
          securityContext:
            privileged: true
            allowPrivilegeEscalation: true
      restartPolicy: Never
I need to run this command with different parameters. I have around 30 parameters to run, and I'm not sure what the best solution is here. I'm thinking of creating containers in a loop to run all the parameters. How can I do this? I want to run the commands or containers all simultaneously.
Some of the ways that you could do it, outside of the solutions proposed in other answers, are the following:
With a templating tool like Helm, where you would template the exact specification of your workload and then iterate over it with different values (see the example below)
Using the Kubernetes official documentation on work-queue topics (a minimal Indexed Job sketch follows this list):
Indexed Job for Parallel Processing with Static Work Assignment - alpha
Parallel Processing using Expansions
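For reference, a rough sketch of the Indexed Job approach might look like the following. It assumes a cluster version where Indexed Jobs are available; how each index maps to one of your 30 parameters (here simply appended to a hypothetical path) is up to your script:
# Sketch only: an Indexed Job fans out one Pod per completion index.
apiVersion: batch/v1
kind: Job
metadata:
  name: my-job-indexed
  namespace: my-namespace
spec:
  completions: 30          # one completion per parameter
  parallelism: 30          # run them all simultaneously
  completionMode: Indexed
  template:
    spec:
      containers:
        - name: my-container
          image: gcr.io/project-id/my-image:latest
          command:
            - sh
            - -c
            # JOB_COMPLETION_INDEX is injected by Kubernetes for Indexed Jobs;
            # mapping it to a real parameter is left to the script.
            - 'run-vpn-script.sh "/to/download/$JOB_COMPLETION_INDEX"'
      restartPolicy: Never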
Helm example:
Helm, in short, is a templating tool that allows you to template your manifests (YAML files). With it you can have multiple instances of Jobs with different names and different commands.
Assuming that you've installed Helm by following this guide:
Helm.sh: Docs: Intro: Install
You can create an example Chart that you will modify to run your Jobs:
helm create chart-name
You will need to delete everything that is in the chart-name/templates/ and clear the chart-name/values.yaml file.
After that you can create your values.yaml file which you will iterate upon:
jobs:
  - name: job1
    command: ['"perl", "-Mbignum=bpi", "-wle", "print bpi(3)"']
    image: perl
  - name: job2
    command: ['"perl", "-Mbignum=bpi", "-wle", "print bpi(20)"']
    image: perl
templates/job.yaml
{{- range $jobs := .Values.jobs }}
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ $jobs.name }}
  namespace: default # <-- FOR EXAMPLE PURPOSES ONLY!
spec:
  template:
    spec:
      containers:
        - name: my-container
          image: {{ $jobs.image }}
          command: {{ $jobs.command }}
          securityContext:
            privileged: true
            allowPrivilegeEscalation: true
      restartPolicy: Never
---
{{- end }}
If you have created the above files, you can run the following command to see beforehand what will be applied to the cluster:
$ helm template . (inside the chart-name folder)
---
# Source: chart-name/templates/job.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: job1
  namespace: default
spec:
  template:
    spec:
      containers:
        - name: my-container
          image: perl
          command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(3)"]
          securityContext:
            privileged: true
            allowPrivilegeEscalation: true
      restartPolicy: Never
---
# Source: chart-name/templates/job.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: job2
  namespace: default
spec:
  template:
    spec:
      containers:
        - name: my-container
          image: perl
          command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(20)"]
          securityContext:
            privileged: true
            allowPrivilegeEscalation: true
      restartPolicy: Never
A side note #1!
This example will create X amount of Jobs, where each one is separate from the others. Please refer to the documentation on data persistence if the downloaded files need to be stored persistently (example: GKE).
A side note #2!
You can also add your namespace definition in the templates (templates/namespace.yaml) so it will be created before running your Jobs.
You can also install the above Chart by running:
$ helm install chart-name . (inside the chart-name folder)
After that you should see 2 Jobs that have completed:
$ kubectl get pods
NAME         READY   STATUS      RESTARTS   AGE
job1-2dcw5   0/1     Completed   0          82s
job2-9cv9k   0/1     Completed   0          82s
And the output that they've created:
$ echo "one:"; kubectl logs job1-2dcw5; echo "two:"; kubectl logs job2-9cv9k
one:
3.14
two:
3.1415926535897932385
Additional resources:
Stackoverflow.com: Questions: Kubernetes creation of multiple deployment with one deployment file
In simpler terms, you want to run multiple commands. The following is a sample format to execute multiple commands in a pod:
command: ["/bin/bash","-c","touch /foo && echo 'here' && ls /"]
When we apply this logic to your requirement for two different operations:
command: ["sh", "-c", "run-vpn-script.sh /to/download/this && run-vpn-script.sh /to/download/another"]
If you want to run the same command multiple times, you can deploy the same YAML multiple times by just changing the name.
You can use the sed command to replace the values in the YAML and apply those YAMLs to the cluster to create the containers.
Example job.yaml
---
apiVersion: v1
kind: Namespace
metadata:
  name: my-namespace
---
apiVersion: batch/v1
kind: Job
metadata:
  name: my-job
  namespace: my-namespace
spec:
  template:
    spec:
      containers:
        - name: my-container
          image: gcr.io/project-id/my-image:latest
          command: COMMAND # need to run this multiple times
          securityContext:
            privileged: true
            allowPrivilegeEscalation: true
      restartPolicy: Never
Command:
sed -i 's|COMMAND|["sh", "run-vpn-script.sh", "/to/download/this"]|' job.yaml
The above command replaces the COMMAND placeholder in the YAML; you can then apply the YAML to the cluster to create the container. You can do the same for other variables.
You can pass different parameters as needed into the command that gets set in the YAML.
You can also create multiple jobs from the command line:
kubectl create job test-job --from=cronjob/a-cronjob
https://www.mankier.com/1/kubectl-create-job
Pass other params as needed into the command.
If you just want to run a Pod rather than a Job, you can also try:
kubectl run nginx --image=nginx --command -- <cmd> <arg1> ... <argN>
https://jamesdefabia.github.io/docs/user-guide/kubectl/kubectl_run/

Deploying a specific image tag in OpenShift Origin from image stream

I have configured my Gitlab CI pipelines so that they build an OCI image with Docker-in-Docker and upload it to Gitlab's own registry.
Now, I want to deploy images built in my CI pipelines to OpenShift Origin. All images in the registry are tagged with $CI_COMMIT_SHORT_SHA (i.e.: I do not use "latest").
How can I do that?
This is what I have tried so far:
before_script:
  - oc login --server="$OPENSHIFT_SERVER" --token="$OPENSHIFT_TOKEN"
  - oc project myproject
script:
  - oc tag registry.gitlab.com/myproject/backend:$CI_COMMIT_SHORT_SHA backend:$CI_COMMIT_SHORT_SHA
  - oc import-image backend:$CI_COMMIT_SHORT_SHA
  - oc set image dc/backend backend=myproject/backend:$CI_COMMIT_SHORT_SHA
  - oc rollout latest backend
Everything seems to work fine until oc set image. I would expect it to change the deployment configuration to use the specified image tag ($CI_COMMIT_SHORT_SHA), but it seems the configuration is not really modified and so, the rollout still deploys the old (previous) image.
What am I missing? Is there a better way to deploy a specific tag from a private registry?
Update
Here is my deployment configuration:
kind: DeploymentConfig
apiVersion: apps.openshift.io/v1
metadata:
  annotations:
    openshift.io/generated-by: OpenShiftNewApp
  selfLink: /apis/apps.openshift.io/v1/namespaces/myproject/deploymentconfigs/backend
  resourceVersion: '38635053'
  name: backend
  uid: 02809a3d-...
  creationTimestamp: '2019-10-14T23:04:43Z'
  generation: 7
  namespace: myproject
  labels:
    app: backend
spec:
  strategy:
    type: Rolling
    rollingParams:
      updatePeriodSeconds: 1
      intervalSeconds: 1
      timeoutSeconds: 600
      maxUnavailable: 25%
      maxSurge: 25%
    resources: {}
    activeDeadlineSeconds: 21600
  triggers:
    - type: ConfigChange
    - type: ImageChange
      imageChangeParams:
        automatic: true
        containerNames:
          - backend
        from:
          kind: ImageStreamTag
          namespace: myproject
          name: 'backend:094971ea'
        lastTriggeredImage: >-
          registry.gitlab.com/myproject/backend#sha256:ebce...
  replicas: 1
  revisionHistoryLimit: 10
  test: false
  selector:
    app: backend
    deploymentconfig: backend
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: backend
        deploymentconfig: backend
      annotations:
        openshift.io/generated-by: OpenShiftNewApp
    spec:
      containers:
        - name: backend
          image: >-
            registry.gitlab.com/myproject/backend#sha256:ebce...
          ports:
            - containerPort: 8080
              protocol: TCP
          resources: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          imagePullPolicy: IfNotPresent
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      dnsPolicy: ClusterFirst
      securityContext: {}
      schedulerName: default-scheduler
status:
  observedGeneration: 7
  details:
    message: image change
    causes:
      - type: ImageChange
        imageTrigger:
          from:
            kind: DockerImage
            name: >-
              registry.gitlab.com/myproject/backend#sha256:ebce...
  availableReplicas: 1
  unavailableReplicas: 0
  latestVersion: 4
  updatedReplicas: 1
  conditions:
    - type: Available
      status: 'True'
      lastUpdateTime: '2019-10-14T23:57:51Z'
      lastTransitionTime: '2019-10-14T23:57:51Z'
      message: Deployment config has minimum availability.
    - type: Progressing
      status: 'True'
      lastUpdateTime: '2019-10-16T20:09:20Z'
      lastTransitionTime: '2019-10-16T20:09:17Z'
      reason: NewReplicationControllerAvailable
      message: replication controller "backend-4" successfully rolled out
  replicas: 1
  readyReplicas: 1
One way to "solve" this is to have the ImageChange trigger listen to something other than a specific commit id: some logical name that does not exist as a tag in the Docker registry, say "default".
If you do that, then in your script the only thing you need to do is:
- oc tag registry.gitlab.com/myproject/backend:$CI_COMMIT_SHORT_SHA backend:default
OpenShift will then take care of updating the image in the DeploymentConfig and rolling out a new deployment for you.
The OP asked why not use "latest". latest is kind of "magical" in that if you push an image to a registry without a tag, it will be tagged latest. This makes it very easy to overwrite it by accident.
So let's say you use "latest" as the tag that you listen to in the ImageStream. What happens if somebody imports the ImageStream? It will fetch the latest tag and overwrite what you have manually tagged.
If you want this kind of control in your pipeline, use an ImageStreamTag name that does not exist in your Docker registry, as I said above.
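For illustration, the trigger section of the DeploymentConfig above would then reference the logical tag instead of a commit id (a sketch based on the question's config; "default" is just the example name):
triggers:
  - type: ConfigChange
  - type: ImageChange
    imageChangeParams:
      automatic: true
      containerNames:
        - backend
      from:
        kind: ImageStreamTag
        namespace: myproject
        name: 'backend:default'   # logical tag that only the pipeline retags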

How to avoid repeating GUID in deployment definition

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: app-2993d9d2-5cb4-4f4c-a9f3-ec630036f5d0
spec:
  selector:
    matchLabels:
      client: 2993d9d2-5cb4-4f4c-a9f3-ec630036f5d0
  template:
    metadata:
      labels:
        client: 2993d9d2-5cb4-4f4c-a9f3-ec630036f5d0
    spec:
      containers:
        - name: xxx
          image: xxx
          env:
            - name: GUID
              valueFrom:
                fieldRef:
                  fieldPath: spec.template.metadata.labels.client
I tried passing the existing value from the definition to the env variable using different expressions, and none of them worked:
error converting fieldPath: field label not supported: spec.template.metadata.labels.client
upd: found what you can pass in, doesn't help...
I essentially have to repeat myself 4 times. Is there a way to have less repetition in the pod definition to ease management? According to this you can pass something in, it doesn't say what though.
P.S. Do I really need the same GUID in spec.template and spec.selector? It doesn't work without that.
You don't necessarily need to use GUIDs here; those are just labels and names...
Secondly, they refer to different things (although some of them have to be the same in some cases):
The metadata name is the name of the Deployment in question. You will use it to reference and manipulate this specific Deployment during its lifecycle.
labels and matchLabels need to be the same if you want them matched together, which in this case you do. Kubernetes is quite flexible when it comes to labeling, and different assets can have multiple labels on them (say a pod can have labels: app: postfix, tier: backend, layer: mysql, env: dev). It stands to reason that the label(s) that you want matched and the label(s) to be matched have to be the same in order to be matched.
As for automating the labeling in the Deployment to avoid repetition, maybe Helm charts or some other 'automating Kubernetes' approach, depending on your actual need, would be better (see the sketch below)?
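A rough illustration of that Helm route (the chart layout and the client value name are assumptions): define the GUID once in values.yaml and reference it everywhere in the template, so only one place has to change.
# values.yaml (hypothetical)
client: 2993d9d2-5cb4-4f4c-a9f3-ec630036f5d0

# templates/deployment.yaml (hypothetical)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-{{ .Values.client }}
spec:
  selector:
    matchLabels:
      client: {{ .Values.client }}
  template:
    metadata:
      labels:
        client: {{ .Values.client }}
    spec:
      containers:
        - name: xxx
          image: xxx
          env:
            - name: GUID
              value: {{ .Values.client | quote }}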
Additional note: for passing a label to an env variable, the following can be used starting from Kubernetes 1.9:
...
template:
  metadata:
    labels:
      label_name: label-value
...
env:
  - name: ENV_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.labels['label_name']
Below is full mock code to demonstrate this (client 1.9.3, server 1.9.0):
# cat d.yaml:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: app-guidhere
spec:
  selector:
    matchLabels:
      client: guidhere
  template:
    metadata:
      labels:
        client: guidhere
    spec:
      containers:
        - name: some-name
          image: nginx
          env:
            - name: GUIDENV
              valueFrom:
                fieldRef:
                  fieldPath: metadata.labels['client']
# after: kubectl create -f d.yaml and connecting to the container
# echo $GUIDENV responds with "guidhere"
And I've just tried this and it works correctly (mind the k8s versions).
