Unable to use GCS bucket for Helm-based Kubernetes Jenkins backup

I am using the official stable Jenkins Helm chart to install Jenkins on Kubernetes, with a GCS bucket as the backup destination in the corresponding section of the values.yaml file:
backup:
  enabled: true
  # Used for label app.kubernetes.io/component
  componentName: "jenkins-backup"
  schedule: "0 2 * * *"
  labels: {}
  annotations: {}
  image:
    repository: "maorfr/kube-tasks"
    tag: "0.2.0"
  extraArgs: []
  # Add existingSecret for AWS credentials
  existingSecret: {}
  env: []
  resources:
    requests:
      memory: 1Gi
      cpu: 1
    limits:
      memory: 1Gi
      cpu: 1
  # Destination to store the backup artifacts
  # Supported cloud storage services: AWS S3, Minio S3, Azure Blob Storage, Google Cloud Storage
  # Additional support can be added. Visit this repository for details
  # Ref: https://github.com/maorfr/skbn
  destination: "gs://jenkins-backup-240392409"
However, when the backup job starts, I get the following in its logs:
gs not implemented
Edit: To address the issue raised by @Maxim in a comment below, the pod's description indicates that the quotes do not end up in the backup command:
Pod Template:
  Labels:           <none>
  Service Account:  my-service-account
  Containers:
   jenkins-backup:
    Image:      maorfr/kube-tasks:0.2.0
    Port:       <none>
    Host Port:  <none>
    Command:
      kube-tasks
    Args:
      simple-backup
      -n
      jenkins
      -l
      app.kubernetes.io/instance=my-jenkins
      --container
      jenkins
      --path
      /var/jenkins_home
      --dst
      gs://my-destination-backup-bucket-6266

You should change the "gs" in the destination to "gcs":
destination: "gcs://jenkins-backup-240392409"
Alternatively, you can use the ThinBackup plugin in Jenkins, where the backup is straightforward. Check this guide for full instructions and a walkthrough.

Related

Add a Persistent Volume Claim to a Kubernetes Dask Cluster

I am running a Dask cluster and a Jupyter notebook server on cloud resources using Kubernetes and Helm. I am using a YAML file for the Dask cluster and Jupyter, initially taken from https://docs.dask.org/en/latest/setup/kubernetes-helm.html:
apiVersion: v1
kind: Pod

worker:
  replicas: 2 # number of workers
  resources:
    limits:
      cpu: 2
      memory: 2G
    requests:
      cpu: 2
      memory: 2G
  env:
    - name: EXTRA_PIP_PACKAGES
      value: s3fs --upgrade

# We want to keep the same packages on the workers and jupyter environments
jupyter:
  enabled: true
  env:
    - name: EXTRA_PIP_PACKAGES
      value: s3fs --upgrade
  resources:
    limits:
      cpu: 1
      memory: 2G
    requests:
      cpu: 1
      memory: 2G
and I am using another YAML file to create the storage locally.
# CREATE A PERSISTENT VOLUME CLAIM // attached to our pod config
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dask-cluster-persistent-volume-claim
spec:
  accessModes:
    - ReadWriteOnce # can be used by a single node; ReadOnlyMany: for multiple nodes; ReadWriteMany: read/written by many nodes
  resources:
    requests:
      storage: 2Gi # storage capacity
I would like to add a persistent volume claim to the first YAML file, but I couldn't figure out where to add the volumes and volumeMounts.
If you have an idea, please share it, thank you.
I started by creating a PVC with the following YAML file:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pdask-cluster-persistent-volume-claim
spec:
  accessModes:
    - ReadWriteOnce # can be used by a single node; ReadOnlyMany: for multiple nodes; ReadWriteMany: read/written by many nodes
  resources: # https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes
    requests:
      storage: 2Gi
launching it in bash with:
kubectl apply -f Dask-Persistent-Volume-Claim.yaml
#persistentvolumeclaim/pdask-cluster-persistent-volume-claim created
I checked the creation of the persistent volume:
kubectl get pv
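As an aside, kubectl get pv lists cluster-wide PersistentVolumes; to confirm the claim itself was created and bound, the PVC can also be checked directly (claim name taken from the YAML above):
kubectl get pvc pdask-cluster-persistent-volume-claim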
I made major changes to the Dask cluster YAML: I added volumes and volumeMounts so that the workers and Jupyter read/write a data directory on the persistent volume created previously, and I set serviceType to LoadBalancer with a service port:
apiVersion: v1
kind: Pod

scheduler:
  name: scheduler
  enabled: true
  image:
    repository: "daskdev/dask"
    tag: 2021.8.1
    pullPolicy: IfNotPresent
  replicas: 1 # (should always be 1)
  serviceType: "LoadBalancer" # Scheduler service type. Set to `LoadBalancer` to expose outside of your cluster.
  # serviceType: "NodePort"
  # serviceType: "ClusterIP"
  # loadBalancerIP: null # Some cloud providers allow you to specify the loadBalancerIP when using the `LoadBalancer` service type. If your cloud does not support it this option will be ignored.
  servicePort: 8786 # Scheduler service internal port.

# DASK WORKERS
worker:
  name: worker # Dask worker name.
  image:
    repository: "daskdev/dask" # Container image repository.
    tag: 2021.8.1 # Container image tag.
    pullPolicy: IfNotPresent # Container image pull policy.
    dask_worker: "dask-worker" # Dask worker command. E.g. `dask-cuda-worker` for GPU worker.
  replicas: 2
  resources:
    limits:
      cpu: 2
      memory: 2G
    requests:
      cpu: 2
      memory: 2G
  mounts: # Worker pod volumes and volume mounts; mounts.volumes follows the Kubernetes API v1 Volume spec, mounts.volumeMounts follows the Kubernetes API v1 VolumeMount spec.
    volumes:
      - name: dask-storage
        persistentVolumeClaim:
          claimName: pvc-dask-data
    volumeMounts:
      - name: dask-storage
        mountPath: /save_data # folder for storage
  env:
    - name: EXTRA_PIP_PACKAGES
      value: s3fs --upgrade

# We want to keep the same packages on the worker and jupyter environments
jupyter:
  name: jupyter # Jupyter name.
  enabled: true # Enable/disable the bundled Jupyter notebook.
  # rbac: true # Create RBAC service account and role to allow Jupyter pod to scale worker pods and access logs.
  image:
    repository: "daskdev/dask-notebook" # Container image repository.
    tag: 2021.8.1 # Container image tag.
    pullPolicy: IfNotPresent # Container image pull policy.
  replicas: 1 # Number of notebook servers.
  serviceType: "LoadBalancer" # Jupyter service type. Set to `LoadBalancer` to expose outside of your cluster.
  # serviceType: "NodePort"
  # serviceType: "ClusterIP"
  servicePort: 80 # Jupyter service internal port.
  # This hash corresponds to the password 'dask'
  # password: 'sha1:aae8550c0a44:9507d45e087d5ee481a5ce9f4f16f37a0867318c' # Password hash.
  env:
    - name: EXTRA_PIP_PACKAGES
      value: s3fs --upgrade
  resources:
    limits:
      cpu: 1
      memory: 2G
    requests:
      cpu: 1
      memory: 2G
  mounts: # Jupyter pod volumes and volume mounts; mounts.volumes follows the Kubernetes API v1 Volume spec, mounts.volumeMounts follows the Kubernetes API v1 VolumeMount spec.
    volumes:
      - name: dask-storage
        persistentVolumeClaim:
          claimName: pvc-dask-data
    volumeMounts:
      - name: dask-storage
        mountPath: /save_data # folder for storage
Then, I install my Dask configuration using Helm:
helm install my-config dask/dask -f values.yaml
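Before exec'ing into anything, the release and its pods can be verified (release name from the command above):
helm status my-config
kubectl get pods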
Finally, I accessed my Jupyter notebook server interactively:
kubectl exec -ti [pod-name] -- /bin/bash
to check that the mounted data folder exists.
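A quicker check, assuming the claimName in the values matches the PVC that was actually created, is to list the mount path directly; note that the mountPath configured above is /save_data (the pod name is a placeholder):
kubectl exec -ti <jupyter-pod-name> -- ls /save_data
kubectl exec -ti <jupyter-pod-name> -- df -h /save_data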

AKS - How to mount volume with file for pod/image

I am fairly new to AKS deployments with volume mounts. I want to create a pod in AKS from an image; that image needs a config.yaml file mounted as a volume (I already have the file, and it must be passed to the container for it to run successfully).
Below is the Docker command that is working on my local machine:
docker run -v <Absolute_path_of_config.yaml>:/config.yaml image:tag
I want to achieve the same thing in AKS. When I tried to deploy it using an Azure Files mount (with a PersistentVolumeClaim), the volume gets attached. The question now is how to pass the config.yaml file to that pod. I tried uploading config.yaml to the Azure file share volume attached in the pod deployment, without any success.
Below is the pod deployment file that I used
kind: Pod
apiVersion: v1
metadata:
  name: mypod
spec:
  containers:
    - name: mypod
      image: image:tag
      resources:
        requests:
          cpu: 100m
          memory: 128Mi
        limits:
          cpu: 250m
          memory: 1Gi
      volumeMounts:
        - mountPath: "/config.yaml"
          name: volume
  volumes:
    - name: volume
      persistentVolumeClaim:
        claimName: my-azurefile-storage
I need help with how to use that local config.yaml file in the AKS deployment so the image can run properly.
Thanks in advance.
Create a Kubernetes secret from the config.yaml file.
kubectl create secret generic config-yaml --from-file=config.yaml
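To confirm what ended up in the secret before mounting it, you can inspect it:
kubectl describe secret config-yaml
kubectl get secret config-yaml -o yaml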
Mount it as a volume in the pod.
apiVersion: v1
kind: Pod
metadata:
  name: config
spec:
  containers:
    - name: config
      image: alpine
      command:
        - cat
      resources: {}
      tty: true
      volumeMounts:
        - name: config
          mountPath: /config.yaml
          subPath: config.yaml
  volumes:
    - name: config
      secret:
        secretName: config-yaml
Exec into the pod and view the file.
kubectl exec -it config sh
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
/ # ls
bin dev home media opt root sbin sys usr
config.yaml etc lib mnt proc run srv tmp var
/ # cat config.yaml
---
apiUrl: "https://my.api.com/api/v1"
username: admin
password: password

Kubernetes - how to reference file share in order to mount volume?

We plan to use Azure Kubernetes Service for Kubernetes. We have our Azure file share.
Is it possible to reference the Azure file share within the Pod or Deployment YAML definition so that the volume can be mounted at the container (Pod) level? Does this reference need to be defined during AKS cluster creation, or is it enough to reference it when we execute the kubectl apply command to deploy our pods?
Thanks
So, as per the Mount the file share as a volume documentation provided by @AndreyDonald, you can reference it like this:
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
      name: mypod
      resources:
        requests:
          cpu: 100m
          memory: 128Mi
        limits:
          cpu: 250m
          memory: 256Mi
      volumeMounts:
        - name: azure
          mountPath: /mnt/azure
  volumes:
    - name: azure
      azureFile:
        secretName: azure-secret
        shareName: aksshare
        readOnly: false
But prior to that you should create a Kubernetes secret:
kubectl create secret generic azure-secret --from-literal=azurestorageaccountname=$AKS_PERS_STORAGE_ACCOUNT_NAME --from-literal=azurestorageaccountkey=$STORAGE_KEY
And in order to create that secret you should assign correct values to the variables:
$AKS_PERS_STORAGE_ACCOUNT_NAME
$STORAGE_KEY
You don't have to create a new file share; just get the key of the existing storage account:
# Get storage account key
STORAGE_KEY=$(az storage account keys list --resource-group $AKS_PERS_RESOURCE_GROUP --account-name $AKS_PERS_STORAGE_ACCOUNT_NAME --query "[0].value" -o tsv)
and use it.
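For completeness, a sketch of setting the other variables referenced above (the values are placeholders for your existing resource group and storage account):
AKS_PERS_RESOURCE_GROUP=myResourceGroup
AKS_PERS_STORAGE_ACCOUNT_NAME=mystorageaccount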

CI/CD update Multicontainer Pod

I'm trying to build a multi-container pod via a pipeline and release it via Helm charts.
For a single-container pod I can do this, which works: pass the version and the location of the container to the Helm chart:
helm upgrade --install \
  --set image.repository=${CI_REGISTRY}/${ENVIRONMENT,,}/${CI_PROJECT_NAME} \
  --set image.tag=${CI_COMMIT_SHA} \
  ${CI_PROJECT_NAME} \
How do I pass a version or a location for a specific container if the Helm chart defines a multi-container pod?
containers:
  - repo: myrepo/qa/helloworld1
    tag: e2fd70931d264490b2d25012e884897f970f5916
    pullPolicy: Always
    ports:
      container: 8090
    livenessProbe:
      initialDelaySeconds: 6
      tcpSocket:
        port: 8090
    resources:
      requests:
        memory: 128Mi
        cpu: 50m
      limits:
        memory: 128Mi
        cpu: 100m
  - repo: myrepo/qa/helloworld2
    tag: 6bb39948f2a5f926f7968480435ec39a4e07e721
    pullPolicy: Always
    ports:
      container: 8080
    livenessProbe:
      initialDelaySeconds: 6
      tcpSocket:
        port: 8080
    resources:
      requests:
        memory: 128Mi
        cpu: 50m
      limits:
        memory: 128Mi
        cpu: 100m
That depends on your Helm chart. The reason you can pass image.tag and image.repository is that inside the Helm chart templates there is a section specifying the following:
containers:
  - image: {{ .Values.image.repository }}/app-name:{{ .Values.image.tag }}
Helm templates a deployment.yaml. By default it replaces each of the values in a chart with whatever defaults were specified in the values.yaml file that is part of that chart. Whenever you run a helm command such as helm install or helm upgrade --install and specify the --set flag, you are overriding the defaults specified in values.yaml. See the docs on helm upgrade for more info on overriding the values in a chart.
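For instance, passing --set image.repository=myrepo/app --set image.tag=abc123 on the command line overrides the equivalent fragment of the chart's values.yaml (the values here are placeholders):
image:
  repository: myrepo/app
  tag: abc123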
To answer your question: it depends on how that chart is defined. What you often see is that in the values.yaml of a multi-container pod you define two sets of images, e.g.:
# values.yaml
image1:
  tag: <sha-here>
  repository: <repo-here>
image2:
  tag: <sha-here>
  repository: <repo-here>
and in the chart you can then refer to those values by specifying:
containers:
  - image: {{ .Values.image1.repository }}/app-name:{{ .Values.image1.tag }}
  - image: {{ .Values.image2.repository }}/app-name:{{ .Values.image2.tag }}
However, it depends on your specific Helm chart where you specify these values. Are you able to update your Helm Chart? Or is it an external Chart?
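Assuming a values layout like the image1/image2 example above, the pipeline call from the question could then set both images; a sketch (the chart reference and the per-image repository suffixes are placeholders):
helm upgrade --install \
  --set image1.repository=${CI_REGISTRY}/${ENVIRONMENT,,}/${CI_PROJECT_NAME}-app1 \
  --set image1.tag=${CI_COMMIT_SHA} \
  --set image2.repository=${CI_REGISTRY}/${ENVIRONMENT,,}/${CI_PROJECT_NAME}-app2 \
  --set image2.tag=${CI_COMMIT_SHA} \
  ${CI_PROJECT_NAME} ./chart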

Unable to setup docker private registry with persistent storage on kubernetes with helm

I am trying to set up a private Docker registry on a Kubernetes cluster with Helm, but I am getting an error for the PVC. The error is:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 22m default-scheduler Successfully assigned docker-reg/docker-private-registry-docker-registry-6454b85dbb-zpdjc to 192.168.1.19
Warning FailedMount 2m10s (x9 over 20m) kubelet, 192.168.1.19 Unable to mount volumes for pod "docker-private-registry-docker-registry-6454b85dbb-zpdjc_docker-reg(82c8be80-eb43-11e8-85c9-b06ebfd124ff)": timeout expired waiting for volumes to attach or mount for pod "docker-reg"/"docker-private-registry-docker-registry-6454b85dbb-zpdjc". list of unmounted volumes=[data]. list of unattached volumes=[auth data docker-private-registry-docker-registry-config default-token-xc4p7]
What might be the reason for this error? I've also tried to create a PVC first and then use the existing PVC with the Docker registry Helm chart, but it gives the same error.
Steps:
1. Create an htpasswd file (see the sketch after this list).
2. Edit values.yml and add the contents of the htpasswd file to the htpasswd key.
3. Modify values.yml to enable persistence.
4. Run helm install stable/docker-registry --namespace docker-reg --name docker-private-registry --values helm-docker-reg/values.yml
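For step 1, the htpasswd file can be generated with the htpasswd tool from apache2-utils; the registry expects a bcrypt hash, hence -B (username and password are placeholders):
htpasswd -Bbn dasdma mypassword > ./htpasswd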
values.yml file:
# Default values for docker-registry.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
replicaCount: 1

updateStrategy:
  # type: RollingUpdate
  # rollingUpdate:
  #   maxSurge: 1
  #   maxUnavailable: 0

podAnnotations: {}

image:
  repository: registry
  tag: 2.6.2
  pullPolicy: IfNotPresent
# imagePullSecrets:
#   - name: docker

service:
  name: registry
  type: ClusterIP
  # clusterIP:
  port: 5000
  # nodePort:
  annotations: {}
  # foo.io/bar: "true"

ingress:
  enabled: false
  path: /
  # Used to create an Ingress record.
  hosts:
    - chart-example.local
  annotations:
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  tls:
    # Secrets must be manually created in the namespace.
    # - secretName: chart-example-tls
    #   hosts:
    #     - chart-example.local

resources: {}
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
  # requests:
  #   cpu: 100m
  #   memory: 128Mi

persistence:
  accessMode: 'ReadWriteOnce'
  enabled: true
  size: 10Gi
  storageClass: 'rook-ceph-block'

# set the type of filesystem to use: filesystem, s3
storage: filesystem

# Set this to name of secret for tls certs
# tlsSecretName: registry.docker.example.com
secrets:
  haSharedSecret: ""
  htpasswd: "dasdma:$2y$05$bnLaYEdTLawodHz2ULzx2Ob.OUI6wY6bXr9WUuasdwuGZ7TIsTK2W"
# Secrets for Azure
#   azure:
#     accountName: ""
#     accountKey: ""
#     container: ""
# Secrets for S3 access and secret keys
#   s3:
#     accessKey: ""
#     secretKey: ""
# Secrets for Swift username and password
#   swift:
#     username: ""
#     password: ""

# Options for s3 storage type:
# s3:
#   region: us-east-1
#   bucket: my-bucket
#   encrypt: false
#   secure: true

# Options for swift storage type:
# swift:
#   authurl: http://swift.example.com/
#   container: my-container

configData:
  version: 0.1
  log:
    fields:
      service: registry
  storage:
    cache:
      blobdescriptor: inmemory
  http:
    addr: :5000
    headers:
      X-Content-Type-Options: [nosniff]
  health:
    storagedriver:
      enabled: true
      interval: 10s
      threshold: 3

securityContext:
  enabled: true
  runAsUser: 1000
  fsGroup: 1000

priorityClassName: ""

nodeSelector: {}

tolerations: []
It's working now. The issue was with the OpenEBS storage, which is documented here: https://docs.openebs.io/docs/next/tsgiscsi.html
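For anyone hitting the same FailedMount timeout, a reasonable starting point is to check whether the PVC ever binds and what its events say (the namespace follows the install command above; the claim name is whatever the chart created):
kubectl -n docker-reg get pvc
kubectl -n docker-reg describe pvc <claim-name>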
