CI/CD update Multicontainer Pod - docker

I'm trying to build a multi-container pod via a pipeline and release it via Helm charts.
For a single-container pod I can do the following, which works; it passes the version and the location of the container to the Helm chart:
helm upgrade --install \
  --set image.repository=${CI_REGISTRY}/${ENVIRONMENT,,}/${CI_PROJECT_NAME} \
  --set image.tag=${CI_COMMIT_SHA} \
  ${CI_PROJECT_NAME} \
How do I pass a version or a location for a specific container if the Helm chart describes a multi-container pod?
containers:
  - repo: myrepo/qa/helloworld1
    tag: e2fd70931d264490b2d25012e884897f970f5916
    pullPolicy: Always
    ports:
      container: 8090
    livenessProbe:
      initialDelaySeconds: 6
      tcpSocket:
        port: 8090
    resources:
      requests:
        memory: 128Mi
        cpu: 50m
      limits:
        memory: 128Mi
        cpu: 100m
  - repo: myrepo/qa/helloworld2
    tag: 6bb39948f2a5f926f7968480435ec39a4e07e721
    pullPolicy: Always
    ports:
      container: 8080
    livenessProbe:
      initialDelaySeconds: 6
      tcpSocket:
        port: 8080
    resources:
      requests:
        memory: 128Mi
        cpu: 50m
      limits:
        memory: 128Mi
        cpu: 100m

That depends on your Helm chart. The reason you can pass the image.tag and image.repository values is that inside the Helm chart's templates there is a section specifying the following:
containers:
  - image: {{ .Values.image.repository }}/app-name:{{ .Values.image.tag }}
Helm templates a deployment.yaml. By default it replaces each of the values in a chart with whatever defaults were specified in the values.yaml file that is part of that chart. Whenever you run a Helm command such as helm install or helm upgrade --install and pass the --set flag, you override the defaults specified in values.yaml. See the docs on helm upgrade for more info on overriding the values in a chart.
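For example, if the chart's values.yaml defaults image.tag to latest, a command-line override wins at render time (my-app and ./chart are placeholder names here):
helm upgrade --install my-app ./chart --set image.tag=abc123
# the image tag in the rendered deployment.yaml is now abc123 instead of latest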
To answer your question: it depends on how that chart is defined. What you often see is that in the values.yaml of a multi-container pod you define two sets of images, e.g.:
# values.yaml
image1:
  tag: <sha-here>
  repository: <repo-here>
image2:
  tag: <sha-here>
  repository: <repo-here>
and in the chart you can then refer to those values by specifying:
containers:
  - image: {{ .Values.image1.repository }}/app-name:{{ .Values.image1.tag }}
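Your pipeline can then override each image independently, e.g. (a sketch based on the image1/image2 layout above; the repository paths are illustrative):
helm upgrade --install \
  --set image1.repository=${CI_REGISTRY}/${ENVIRONMENT,,}/helloworld1 \
  --set image1.tag=${CI_COMMIT_SHA} \
  --set image2.tag=${CI_COMMIT_SHA} \
  ${CI_PROJECT_NAME}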
However, where exactly you specify these values depends on your specific Helm chart. Are you able to update your Helm chart? Or is it an external chart?

Related

Build Kubernetes cluster with spark master and spark workers

I've built a custom-spark Docker image with the following dependencies:
Python 3.6.9
Pip 1.18
Java OpenJDK 64-Bit Server VM, 1.8.0_212
Hadoop 3.2
Scala 2.13.0
Spark 3.0.3
which I pushed to Docker Hub: https://hub.docker.com/r/redaer7/custom-spark
The Dockerfile, spark-master, and spark-worker files are stored under: https://github.com/redaER7/Custom-Spark
I verified that /spark-master and /spark-worker work well when creating a container from the image:
docker run -it -d --name spark_1 redaer7/custom-spark:1.0 bash
docker exec -it $CONTAINER_ID /bin/bash
My issue is when I try to build a K8s cluster from the same image, using the following YAML file for the Spark master pod:
kubectl create namespace sparkspace
kubectl -n sparkspace create -f ./spark-master-deployment.yaml
# yaml file
kind: Deployment
apiVersion: apps/v1
metadata:
  name: spark-master
spec:
  replicas: 1 # should always be one
  selector:
    matchLabels:
      component: spark-master
  template:
    metadata:
      labels:
        component: spark-master
    spec:
      containers:
        - name: spark-master
          image: redaer7/custom-spark:1.0
          imagePullPolicy: IfNotPresent
          command: ["/spark-master"]
          ports:
            - containerPort: 7077
            - containerPort: 8080
          resources:
            # limits:
            #   cpu: 1
            #   memory: 1G
            requests:
              cpu: 1 # 100m
              memory: 1G
I get CrashLoopBackOff when viewing the pod with kubectl -n sparkspace get pods.
When inspecting it with kubectl -n sparkspace describe pod $Pod_Name, I see a warning.
Any clue about that first warning? Thank you.
I solved it simply by re-pulling the image:
imagePullPolicy: Always
I had edited the Docker image locally but hadn't changed the following in the config file:
imagePullPolicy: IfNotPresent
I had then pushed it to Docker Hub for later deployment, so the node kept using its cached copy of the image.
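In the deployment above, that amounts to changing one line of the container spec (a sketch showing only the relevant fields):
containers:
  - name: spark-master
    image: redaer7/custom-spark:1.0
    imagePullPolicy: Always # always re-pull on pod (re)start instead of using the node's cached image
    command: ["/spark-master"]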

Add a Persistent Volume Claim to a Kubernetes Dask Cluster

I am running a Dask cluster and a Jupyter notebook server on cloud resources using Kubernetes and Helm.
I am using a YAML file for the Dask cluster and Jupyter, initially taken from https://docs.dask.org/en/latest/setup/kubernetes-helm.html:
apiVersion: v1
kind: Pod
worker:
  replicas: 2 # number of workers
  resources:
    limits:
      cpu: 2
      memory: 2G
    requests:
      cpu: 2
      memory: 2G
  env:
    - name: EXTRA_PIP_PACKAGES
      value: s3fs --upgrade
# We want to keep the same packages on the workers and jupyter environments
jupyter:
  enabled: true
  env:
    - name: EXTRA_PIP_PACKAGES
      value: s3fs --upgrade
  resources:
    limits:
      cpu: 1
      memory: 2G
    requests:
      cpu: 1
      memory: 2G
and I am using another YAML file to create the storage locally:
# CREATE A PERSISTENT VOLUME CLAIM // attached to our pod config
apiVersion: 1
kind: PersistentVolumeClaim
metadata:
  name: dask-cluster-persistent-volume-claim
spec:
  accessModes:
    - ReadWriteOne # can be used by a single node - ReadOnlyMany: for multiple nodes - ReadWriteMany: read/written to/by many nodes
  ressources:
    requests:
      storage: 2Gi # storage capacity
I would like to add a persistent volume claim to the first YAML file, but I couldn't figure out where to add the volumes and volumeMounts.
If you have an idea, please share it. Thank you.
I started by creating a PVC with the following YAML file:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pdask-cluster-persistent-volume-claim
spec:
  accessModes:
    - ReadWriteOnce # can be used by a single node - ReadOnlyMany: for multiple nodes - ReadWriteMany: read/written to/by many nodes
  resources: # https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes
    requests:
      storage: 2Gi
launching it in bash:
kubectl apply -f Dask-Persistent-Volume-Claim.yaml
#persistentvolumeclaim/pdask-cluster-persistent-volume-claim created
I checked the creation of the persistent volume:
kubectl get pv
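The claim itself can be checked the same way (output below is a sketch; the claim should report STATUS Bound once a volume is provisioned for it):
kubectl get pvc
# NAME                                    STATUS   VOLUME    CAPACITY   ACCESS MODES
# pdask-cluster-persistent-volume-claim   Bound    pvc-...   2Gi        RWO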
I then made major changes to the Dask cluster YAML: I added volumes and volumeMounts so that I can read/write in a /save_data directory on the persistent volume created previously, and I set the serviceType to LoadBalancer with a port:
apiVersion: v1
kind: Pod
scheduler:
  name: scheduler
  enabled: true
  image:
    repository: "daskdev/dask"
    tag: 2021.8.1
    pullPolicy: IfNotPresent
  replicas: 1 # (should always be 1)
  serviceType: "LoadBalancer" # Scheduler service type. Set to `LoadBalancer` to expose outside of your cluster.
  # serviceType: "NodePort"
  # serviceType: "ClusterIP"
  # loadBalancerIP: null # Some cloud providers allow you to specify the loadBalancerIP when using the `LoadBalancer` service type. If your cloud does not support it this option will be ignored.
  servicePort: 8786 # Scheduler service internal port.
# DASK WORKERS
worker:
  name: worker # Dask worker name.
  image:
    repository: "daskdev/dask" # Container image repository.
    tag: 2021.8.1 # Container image tag.
    pullPolicy: IfNotPresent # Container image pull policy.
    dask_worker: "dask-worker" # Dask worker command. E.g. `dask-cuda-worker` for GPU worker.
  replicas: 2
  resources:
    limits:
      cpu: 2
      memory: 2G
    requests:
      cpu: 2
      memory: 2G
  mounts: # Worker Pod volumes and volume mounts. mounts.volumes follows the Kubernetes API v1 Volumes spec; mounts.volumeMounts follows the Kubernetes API v1 VolumeMount spec.
    volumes:
      - name: dask-storage
        persistentVolumeClaim:
          claimName: pvc-dask-data
    volumeMounts:
      - name: dask-storage
        mountPath: /save_data # folder for storage
  env:
    - name: EXTRA_PIP_PACKAGES
      value: s3fs --upgrade
# We want to keep the same packages on the worker and jupyter environments
jupyter:
  name: jupyter # Jupyter name.
  enabled: true # Enable/disable the bundled Jupyter notebook.
  # rbac: true # Create RBAC service account and role to allow Jupyter pod to scale worker pods and access logs.
  image:
    repository: "daskdev/dask-notebook" # Container image repository.
    tag: 2021.8.1 # Container image tag.
    pullPolicy: IfNotPresent # Container image pull policy.
  replicas: 1 # Number of notebook servers.
  serviceType: "LoadBalancer" # Jupyter service type. Set to `LoadBalancer` to expose outside of your cluster.
  # serviceType: "NodePort"
  # serviceType: "ClusterIP"
  servicePort: 80 # Jupyter service internal port.
  # This hash corresponds to the password 'dask'
  # password: 'sha1:aae8550c0a44:9507d45e087d5ee481a5ce9f4f16f37a0867318c' # Password hash.
  env:
    - name: EXTRA_PIP_PACKAGES
      value: s3fs --upgrade
  resources:
    limits:
      cpu: 1
      memory: 2G
    requests:
      cpu: 1
      memory: 2G
  mounts: # Jupyter Pod volumes and volume mounts. mounts.volumes follows the Kubernetes API v1 Volumes spec; mounts.volumeMounts follows the Kubernetes API v1 VolumeMount spec.
    volumes:
      - name: dask-storage
        persistentVolumeClaim:
          claimName: pvc-dask-data
    volumeMounts:
      - name: dask-storage
        mountPath: /save_data # folder for storage
Then I installed my Dask configuration using Helm:
helm install my-config dask/dask -f values.yaml
Finally, I accessed my Jupyter pod interactively:
kubectl exec -ti [pod-name] -- /bin/bash
to examine the existence of the /save_data folder.
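Inside the pod, a quick way to confirm the volume is actually mounted (the path comes from the mountPath in the values above):
ls -ld /save_data   # the directory should exist
df -h /save_data    # and should be backed by the PVC's filesystem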

Where is Kubernetes in Docker (KIND) mapping its volume mounts on Windows 10

I'm following the instructions here to install an Elasticsearch cluster on KIND (Kubernetes in Docker): https://www.elastic.co/blog/alpha-helm-charts-for-elasticsearch-kibana-and-cncf-membership
This is running in a 4-node cluster on Docker on Windows 10. I'm running into a problem similar to what's reported here: https://github.com/elastic/helm-charts/issues/137
I'm trying to figure out where the mounts are so I can chown that directory. Where is this mapped on the local machine?
I'm not running WSL2 yet.
In order to change the owner of the /usr/share/elasticsearch/data/nodes directory you have to create an initContainer that changes the permissions.
You can do this by fetching the elasticsearch chart:
helm fetch --untar elastic/elasticsearch
Then edit values.yaml and add the following lines:
antiAffinity: "soft"
# Shrink default JVM heap.
esJavaOpts: "-Xmx128m -Xms128m"
# Allocate smaller chunks of memory per pod.
resources:
  requests:
    cpu: "100m"
    memory: "512M"
  limits:
    cpu: "1000m"
    memory: "512M"
# Request smaller persistent volumes.
volumeClaimTemplate:
  accessModes: [ "ReadWriteOnce" ]
  storageClassName: "hostpath"
  resources:
    requests:
      storage: 100M
extraInitContainers: |
  - name: create
    image: busybox:1.28
    command: ['mkdir', '/usr/share/elasticsearch/data/nodes/']
    volumeMounts:
      - mountPath: /usr/share/elasticsearch/data
        name: elasticsearch-master
  - name: file-permissions
    image: busybox:1.28
    command: ['chown', '-R', '1000:1000', '/usr/share/elasticsearch/']
    volumeMounts:
      - mountPath: /usr/share/elasticsearch/data
        name: elasticsearch-master
This changes the CPU and memory requests and limits for the pods and adds initContainers: one that creates /usr/share/elasticsearch/data/nodes/, and one that runs chown -R 1000:1000 /usr/share/elasticsearch/ to change the ownership of the directory.
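With the chart unpacked locally and values.yaml edited in place, you can then install it (Helm 3 syntax; the release name es is arbitrary):
helm install es ./elasticsearch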

Unable to use GCS bucket for Helm-based Jenkins backup on Kubernetes

Using the official stable/jenkins Helm release to install the chart on Kubernetes.
Using a GCS bucket as the destination in the corresponding section of the values.yaml file:
backup:
  enabled: true
  # Used for label app.kubernetes.io/component
  componentName: "jenkins-backup"
  schedule: "0 2 * * *"
  labels: {}
  annotations: {}
  image:
    repository: "maorfr/kube-tasks"
    tag: "0.2.0"
  extraArgs: []
  # Add existingSecret for AWS credentials
  existingSecret: {}
  env: []
  resources:
    requests:
      memory: 1Gi
      cpu: 1
    limits:
      memory: 1Gi
      cpu: 1
  # Destination to store the backup artifacts
  # Supported cloud storage services: AWS S3, Minio S3, Azure Blob Storage, Google Cloud Storage
  # Additional support can be added. Visit this repository for details
  # Ref: https://github.com/maorfr/skbn
  destination: "gs://jenkins-backup-240392409"
However, when the backup job starts, I get the following in its logs:
gs not implemented
Edit: to address the issue raised by @Maxim in a comment below, the pod's description indicates that the quotes do not end up in the backup command:
Pod Template:
  Labels:           <none>
  Service Account:  my-service-account
  Containers:
    jenkins-backup:
      Image:      maorfr/kube-tasks:0.2.0
      Port:       <none>
      Host Port:  <none>
      Command:
        kube-tasks
      Args:
        simple-backup
        -n
        jenkins
        -l
        app.kubernetes.io/instance=my-jenkins
        --container
        jenkins
        --path
        /var/jenkins_home
        --dst
        gs://my-destination-backup-bucket-6266
You should change the "gs" in the destination to "gcs":
destination: "gcs://jenkins-backup-240392409"
Alternatively, you can use the ThinBackup plugin in Jenkins, where the backup is straightforward; check this guide for full instructions and a walkthrough.
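After fixing the destination in values.yaml, re-applying the release picks up the change (a sketch, assuming the release is named my-jenkins and was installed from the stable/jenkins chart):
helm upgrade my-jenkins stable/jenkins -f values.yaml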

How to overwrite a value during deployment via helm-chart in a multi-container pod?

I have a definition in my values.yaml to deploy 2 containers in one pod.
When running a custom CI/CD pipeline I would like to overwrite the tag (version) of the container that changes.
Normally I would do something like this:
helm upgrade --install app-pod-testing --set container.tag=0.0.2
The values.yaml has 2 containers defined:
containers:
  - repo: services/qa/helloworld1
    tag: 843df3a1fcc87489d7b52b152c50fc6a9d59744d
    pullPolicy: Always
    ports:
      container: 8080
    resources:
      limits:
        memory: 128Mi
    securityContext:
      allowPrivilegeEscalation: false
  - repo: services/qa/helloword2
    tag: bdaf287eaa3a8f9ba89e663ca1c7785894b5128f
    pullPolicy: Always
    ports:
      container: 9080
    resources:
      limits:
        memory: 128Mi
    securityContext:
      allowPrivilegeEscalation: true
How do I overwrite only the tag for repo services/qa/helloword2 during deployment?
Any help/suggestions appreciated.
Do:
helm upgrade --install app-pod-testing --set containers[1].tag=0.0.2
List indexes in --set are zero-based, so containers[1] targets the second entry. See the Helm docs on the format and limitations of --set.
Are you the author of this Helm chart? If yes, you can just use a different property path for each container in the template.
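A minimal sketch of that approach, assuming you can edit the chart (./chart stands in for the chart path): give each container its own named key in values.yaml, reference those keys in the template, and then each tag can be set independently without relying on list indexes.
# values.yaml
helloworld1:
  repo: services/qa/helloworld1
  tag: 843df3a1fcc87489d7b52b152c50fc6a9d59744d
helloword2:
  repo: services/qa/helloword2
  tag: bdaf287eaa3a8f9ba89e663ca1c7785894b5128f

# templates/deployment.yaml (image lines only)
- image: {{ .Values.helloworld1.repo }}:{{ .Values.helloworld1.tag }}
- image: {{ .Values.helloword2.repo }}:{{ .Values.helloword2.tag }}

helm upgrade --install app-pod-testing ./chart --set helloword2.tag=0.0.2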
