GKE Jenkins via Bitnami Helm chart - how to update

I've installed Jenkins on GKE using Bitnami Chart and it is online.
When I want to adjust it using helm upgrade, Kubernetes brings up a new instance while leaving the other running (as expected), but the new instance fails to come up with
Warning FailedAttachVolume 18m attachdetach-controller Multi-Attach error for volume "pvc-b3d609b3-ec10-4966-8713-595702220c40" Volume is already used by pod(s) jenkins-9ddcc795c-vflvm
Warning FailedMount 11m kubelet Unable to attach or mount volumes: unmounted volumes=[jenkins-data], unattached volumes=[default-token-2qsvk jenkins-data]: timed out waiting for the condition
This error makes sense - two instances can't share storage.
If I take down the first instance, then it comes right back. If I helm uninstall, both instances are deleted including the storage.
What is the proper process to upgrade versions/update chart settings?

You can delete the Jenkins Deployment first. If you delete only the Deployment, the other components will remain, along with the storage disk, which can then be reattached to the new Deployment:
kubectl delete deployments.apps jenkins
https://artifacthub.io/packages/helm/bitnami/jenkins#upgrading
Then run the upgrade command for the Helm chart, updating the values file or passing overrides with --set:
helm upgrade jenkins bitnami/jenkins --set jenkinsPassword=$JENKINS_PASSWORD --set jenkinsHome=/bitnami/jenkins/jenkins_home
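As an alternative sketch (not part of the original answer): because the GKE persistent disk behind the PVC is ReadWriteOnce, you can also switch the Jenkins Deployment to the Recreate strategy so the old pod is terminated before the new one is created, which avoids the Multi-Attach error during helm upgrade. Assuming the Deployment is named jenkins:
kubectl patch deployment jenkins -p '{"spec":{"strategy":{"type":"Recreate","rollingUpdate":null}}}'
Note that a later helm upgrade may reset this field if the chart manages the strategy itself, so prefer setting the equivalent value in the chart's values if it exposes one.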

Related

I'm trying to run OpenWhisk on a Kubernetes cluster, but installation pods produce an error

I'm trying to run Apache OpenWhisk on a Kubernetes cluster running in Docker, as explained in the OpenWhisk documentation, but when I reach the point where I'm supposed to wait until the "install-packages" pods complete, they instead fail or never initialize at all.
This is my .yaml file:
whisk:
  ingress:
    type: NodePort
    apiHostName: localhost
    apiHostPort: 31001
    useInternally: false
nginx:
  httpsNodePort: 31001

# A single node cluster; so disable affinity
affinity:
  enabled: false
toleration:
  enabled: false
invoker:
  options: "-Dwhisk.kubernetes.user-pod-node-affinity.enabled=false"
I have Docker and its cluster running alright, and kubectl has docker-desktop as its context. So I run the following commands:
helm repo add openwhisk https://openwhisk.apache.org/charts
helm repo update
helm install owdev openwhisk/openwhisk -n openwhisk --create-namespace -f mycluster.yaml
Then, kubectl get pods -n openwhisk --watch shows the install-packages pod stuck in Init:0/1, and eventually the Init:0/1 turns to Error. Other install-packages pods show up, but they also eventually end in Error.
(Of course I've tried using wsk property set --apihost localhost:31001 --auth <auth provided by the docs>, and then actually trying wsk action create someAction action.js, but that merely returns Unable to create action 'test': Put "https://localhost:31001/api/v1/namespaces/_/actions/test?overwrite=false": x509: certificate is not valid for any names, but wanted to match localhost.)
I've been stuck with this for over a week! Please, any help will do, and thank you!
When pods are stuck like this, there can be several reasons; be sure to start with the simplest ones first.
Make sure your k8s setup has sufficient resources. You need at least 2 vCPUs and 4 GiB of memory for this setup, as per the documentation.
Make sure you're using the supported Kubernetes version (1.19, 1.20 or 1.21).
If some pods are failing and you see DNS-related issues when looking at kubectl logs -f -n <namespace> <pod-name>, verify that the apiHost is reachable from inside the cluster. You can use this guide.
Make sure that the PersistentVolumeClaims created for the pods can be bound to PersistentVolumes in the cluster. Check this with kubectl get pvc -A; you should not see any PVC in the Pending state. If you do, you most probably do not have dynamic provisioning set up, so you will have to create these volumes yourself. If this is too much, you can simply disable persistence in your OpenWhisk setup with this guide, as sketched below.
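For example, a minimal sketch of what disabling persistence can look like in mycluster.yaml, assuming the chart exposes the k8s.persistence.enabled toggle described in the OpenWhisk configuration guide (verify the exact key in the chart's documentation):
k8s:
  persistence:
    enabled: false
Then redeploy with helm upgrade owdev openwhisk/openwhisk -n openwhisk -f mycluster.yaml.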
I hope this helps!

Using Renovate in Kubernetes like Docker-Compose's Watchtower

While looking for a kubernetes equivalent of the docker-compose watchtower container, I stumbled upon renovate. It seems to be a universal tool to update docker tags, dependencies and more.
They also have an example of how to run the service itself inside Kubernetes, and I found this blog post on how to set Renovate up to check Kubernetes manifests for updates (?).
Now the puzzle piece that I'm missing is some super basic working example that updates a single pod's image tag, and then figuring out how to deploy that in a kubernetes cluster. I feel like there needs to be an example out there somewhere but I can't find it for the life of me.
To explain watchtower:
It monitors all containers running in a docker compose setup and pulls new versions of images once they are available, updating the containers in the process.
I found Keel, which looks like Watchtower:
Kubernetes Operator to automate Helm, DaemonSet, StatefulSet & Deployment updates
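For illustration (not from the original answer), Keel is typically enabled per workload through labels/annotations; a minimal sketch on a Deployment, assuming Keel is already installed in the cluster and that these are the label/annotation keys your Keel version expects (check the Keel docs):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                          # hypothetical workload
  labels:
    app: my-app
    keel.sh/policy: minor               # follow new minor/patch tags
    keel.sh/trigger: poll               # poll the registry for new tags
  annotations:
    keel.sh/pollSchedule: "@every 10m"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: myregistry/my-app:1.2.3  # Keel watches this tag for newer versions
Keel then updates the Deployment's image in the cluster when a newer tag appears.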
Alternatively, there is Diun:
Docker Image Update Notifier is a CLI application written in Go and delivered as a single executable (and a Docker image) to receive notifications when a Docker image is updated on a Docker registry.
The Kubernetes provider allows you to analyze the pods of your Kubernetes cluster to extract images found and check for updates on the registry.
I think there is some confusion regarding what Renovate does.
Renovate updates files inside Git repositories, not objects on the Kubernetes API server.
The Kubernetes manager, which you are probably referencing, updates Kubernetes manifests, Helm charts and so on inside a Git repository.
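To make that concrete, a minimal sketch of a renovate.json that enables Renovate's Kubernetes manager on manifests stored in a Git repository (the fileMatch pattern is an assumption; adjust it to your repository layout):
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "extends": ["config:base"],
  "kubernetes": {
    "fileMatch": ["^k8s/.+\\.yaml$"]
  }
}
Renovate then opens pull requests that bump the image tags inside those manifests; something else (CI, a GitOps tool, or kubectl apply) still has to roll the updated manifests out to the cluster.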

What happens when kubernetes restarts containers or the cluster is scaled up?

We are using a Helm chart for deploying our application in a Kubernetes cluster.
We have a StatefulSet and a headless Service. To initialize mTLS, we have created a Job and pass shell and Python scripts as arguments in its command. We have also created a CronJob to update the certificate.
We have written a docker-entrypoint.sh inside the Docker image for some initialization work and to generate TLS certificates.
Questions to ask:
Who (the Helm chart or Kubernetes) takes care of scaling/monitoring/restarting containers?
Does it deploy a new Docker image if a pod fails/restarts?
Will the Docker ENTRYPOINT execute after a container fails/restarts?
Do the Job and CronJob execute if a container restarts?
What other steps are taken by Kubernetes? Would you also share container insights?
Kubernetes, not Helm, will restart a failed container by default, unless you set restartPolicy: Never in the pod spec.
Restarting a container is exactly the same as starting it the first time, so on a restart you can expect things to happen the same way they did when the container was first started.
Internally, the kubelet agent running on each Kubernetes node delegates the task of starting a container to an OCI-compliant container runtime such as Docker or containerd, which then spins up the image as a container on the node.
I would expect the entrypoint script to be executed on both start and restart of a container.
Does it deploy a new Docker image if a pod fails/restarts?
It creates a new container with the same image as specified in the pod spec.
Do the Job and CronJob execute if a container restarts?
If a container that is part of a CronJob fails, Kubernetes will keep restarting it (unless restartPolicy: Never is set in the pod spec) until the Job is considered failed. Check this for how to make a CronJob not restart a container on failure. You can specify backoffLimit to control the number of times it will retry before the Job is considered failed; see the sketch below.
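A minimal sketch of how these fields fit together in a CronJob (the name, schedule, image and command are hypothetical; batch/v1 CronJob requires Kubernetes 1.21+, older clusters use batch/v1beta1):
apiVersion: batch/v1
kind: CronJob
metadata:
  name: cert-renewal                    # hypothetical name
spec:
  schedule: "0 3 * * *"                 # run daily at 03:00
  jobTemplate:
    spec:
      backoffLimit: 3                   # mark the Job as failed after 3 retries
      template:
        spec:
          restartPolicy: Never          # don't restart the failed container in place; each retry gets a new pod
          containers:
          - name: renew-certs
            image: myregistry/cert-renewer:latest   # hypothetical image
            command: ["/bin/sh", "-c", "python /scripts/renew_certs.py"]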
Scaling up is equivalent to scheduling and starting yet another instance of the same container on the same or an altogether different Kubernetes node.
As a side note, you should use a higher-level abstraction such as a Deployment instead of a bare pod: when a bare pod fails, Kubernetes can only restart its containers on the same node, whereas a Deployment creates a replacement pod that can be scheduled on other nodes if it cannot be started on its currently scheduled node.

Adding more disk space to Kubernetes Jenkins slave

I am using the Kubernetes Jenkins plugin on an external master. The default disk size is limited to 10 GB. Adding a PVC with the name jenkins-workspace mounts the disk, but it is created as root with mode 0755 and doesn't give the jenkins user any access.
- Jenkins 2.104 master
- jenkinsci/jnlp slave
- kubernetes 1.7.4
- rhel 7.4
We have a customized jnlp slave, but I have even tried using the default that the plugin pulls in.
Can anybody point me to documentation or a related article that shows how to add privileges for the PVC mount, or how to dynamically add space after provisioning?
Our Jenkins master uses the Kubernetes cloud connection from Configure System, with a Kubernetes pod template that either points to the default jnlp image or uses a container template pointing to our customized jnlp slave in our local registry.
Cheers, Appreciate the help in advance
You cannot set the permissions of the mounted volumes (fsGroup) nor the size of the PVC today. It is not implemented in https://github.com/jenkinsci/kubernetes-plugin/blob/master/src/main/java/org/csanchez/jenkins/plugins/kubernetes/volumes/PersistentVolumeClaim.java, so it just uses the cluster defaults.
It will be possible using YAML once https://github.com/jenkinsci/kubernetes-plugin/pull/275 is implemented.
It is possible to change the permissions by using an init container: while defining the slave pod, you can include an init container that mounts the volume as root and changes its ownership to the jenkins user, as in the sketch below.
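A minimal sketch of such a pod definition (the jenkins UID of 1000, the mount path and the image names are assumptions; adapt them to your slave image and PVC):
apiVersion: v1
kind: Pod
metadata:
  name: jenkins-slave                   # normally generated by the plugin's pod template
spec:
  initContainers:
  - name: fix-permissions
    image: busybox
    command: ["sh", "-c", "chown -R 1000:1000 /home/jenkins"]   # hand ownership to the jenkins user (UID 1000)
    volumeMounts:
    - name: jenkins-workspace
      mountPath: /home/jenkins
  containers:
  - name: jnlp
    image: jenkinsci/jnlp-slave         # or your customized jnlp slave image
    volumeMounts:
    - name: jenkins-workspace
      mountPath: /home/jenkins
  volumes:
  - name: jenkins-workspace
    persistentVolumeClaim:
      claimName: jenkins-workspace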

ImagePullBackOff after deploy to OpenShift

I'm starting with Docker and OpenShift v3.
I have a simple Node.js project and a Dockerfile basically copied from nodejs.org that runs perfectly fine on my local machine with docker run. I pushed my image to Docker Hub and then created my project via oc new-project.
After oc new-app and oc get pods, I see a pod with status ImagePullBackOff and another as Running. After some time, only one pod remains, with status Error. oc logs only gives me: pods for deployment took longer than 600 seconds to become ready.
Another thing that probably could help is that, after the oc new-app command, I got a message like * [WARNING] Image runs as the 'root' user which may not be permitted by your cluster administrator.
Am I doing something wrong or missing something? Is more info needed?
You can see my Dockerfile here and my project's code here.
By default OpenShift will prevent you from running containers as root due to the security risk. Whether you can configure a cluster to allow you to run a specific container as root will depend on what privileges you have to administer the cluster.
You are better off not running your container as root in the first place. To do that, I suggest you use the image at:
https://hub.docker.com/r/ryanj/centos7-s2i-nodejs/
This image is Source-to-Image (S2I) enabled and so integrates with OpenShift's build system.
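For example, a hedged sketch of building your project directly from its Git repository with that S2I builder (the repository URL and app name are placeholders for your own):
oc new-app ryanj/centos7-s2i-nodejs~https://github.com/<your-user>/<your-repo>.git --name=my-node-app
OpenShift then builds the image itself with the S2I builder and deploys it running as a non-root user, so the root-user warning no longer applies.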
