I am working on AWS EKS with Jenkins and Terraform. I am facing an error while destroying the resources: the Jenkins pipeline seems to destroy the EKS cluster first, so the still-attached node group causes an error in the pipeline. The EKS node group should be deleted first, and only then the cluster. If I run the Terraform script locally, it works fine.
Error: error deleting EKS Cluster (EKS_CLUSTER): ResourceInUseException: Cluster has nodegroups attached
I am hoping someone has faced the same issue and can share an answer that resolves my problem.
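As a workaround I am considering destroying the node group explicitly before the rest of the stack, along these lines (the resource address is an assumption; use the address from your own state):

    # Destroy the node group first, then everything else.
    # aws_eks_node_group.this is a placeholder -- check "terraform state list".
    terraform destroy -target=aws_eks_node_group.this -auto-approve
    terraform destroy -auto-approve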
Related
I am using a DigitalOcean Kubernetes cluster, with the Jenkins Helm chart running inside it. DigitalOcean uses containerd://1.4.13 as the container runtime. To run containerized tests inside a Jenkins pipeline, I need to download the sources from the repository, build a Dockerfile with unit tests and a docker-compose.yml with integration tests (in addition to the application there is also a database), and run it all. The problem is that I can't pass /run/containerd/containerd.sock from the node that runs the pod into the pod itself, because I didn't find any functionality in Kubernetes similar to mounting a specific file inside a container in Docker. How can I solve this problem while keeping the tests containerized? Thank you in advance!
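What I am effectively after is the equivalent of Docker's -v /run/containerd/containerd.sock:/run/containerd/containerd.sock. A sketch of the intent (pod name and image are made up, and I have not verified this works):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: test-runner            # hypothetical
    spec:
      containers:
      - name: tests
        image: my-test-image       # hypothetical
        volumeMounts:
        - name: containerd-sock
          mountPath: /run/containerd/containerd.sock
      volumes:
      - name: containerd-sock
        hostPath:
          path: /run/containerd/containerd.sock
          type: Socket             # mounts a single socket file from the node
    EOF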
I have inherited a Jenkins installation that is used by multiple remote teams and runs on an Amazon EKS cluster. For reasons that are not really relevant here, I need to move this Jenkins workload to a new EKS cluster.
Deploying Jenkins itself is not a major issue; I am doing so using Helm. The persistence of the existing Jenkins deployment is bound to an Amazon EBS volume, and the persistence of the new deployment will be as well. The mount point will be /var/jenkins_home.
I'm trying to find a simple way of migrating everything from the current Jenkins installation and configuration to the new one. This mainly includes:
Authorization Strategy (RBAC)
Jobs
Plugins
Cloud and Agent Config
I know that everything required is most likely in Jenkins Home. Could I, in theory, just dump out the current Jenkins Home folder and import it into the new running Jenkins container using kubectl cp or something like that? Is there an easier way? Is this unsafe in some way?
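Concretely, the kind of thing I have in mind, with each command run against its own cluster's kubeconfig context (pod names and namespaces are made up):

    # Stream jenkins_home out of the old pod into a local archive...
    kubectl -n old-ns exec jenkins-old-0 -- \
      tar czf - -C /var/jenkins_home . > jenkins_home.tgz
    # ...and unpack it into the new pod's volume.
    kubectl -n new-ns exec -i jenkins-new-0 -- \
      tar xzf - -C /var/jenkins_home < jenkins_home.tgz

I would make sure no builds are running on either instance while the copy happens.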
I am trying to install Jenkins in my Kubernetes cluster. While exploring, I found that it can be done in two ways.
The first way, as I understood it, is to install a Jenkins master and slaves directly. I found documentation for installing a Jenkins master and slave agents on top of my Kubernetes cluster.
The second way is to use the Kubernetes plugin for Jenkins. With this approach, I would install only the master and configure the plugin; slave pods are then created automatically whenever a build needs them.
Confusion
Here is my confusion:
In the first method, do we need to dedicate worker machines on which to install both the master and the slaves?
In the second method, is it proper to install only the master and configure the plugin to provide the slave agents? Is this the standard way of running Jenkins on top of a Kubernetes cluster?
Can anyone clarify these points for me, please?
I found this tutorial to be pretty helpful in getting Jenkins running on my Kubernetes cluster: https://github.com/GoogleCloudPlatform/continuous-deployment-on-kubernetes
It relies on the Jenkins Kubernetes plugin you mentioned. And if Google is doing it this way, it's probably safe to assume it is a valid method. It is the method I use on my cluster, where the Jenkins master can provision slave pods as needed, which makes much more sense than keeping slaves alive that aren't being used.
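If you are deploying with Helm anyway, the official chart wires up the Kubernetes plugin for dynamic agents out of the box. A minimal sketch (release and namespace names are arbitrary):

    helm repo add jenkins https://charts.jenkins.io
    helm repo update
    helm install jenkins jenkins/jenkins --namespace jenkins --create-namespace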
I am setting up dynamic Jenkins slave provisioning in Kubernetes.
The default jenkins/jnlp-slave:alpine works fine, but I see the error below from the Kubernetes agent:
W0129 19:09:42.310410 26799 kuberuntime_container.go:191] Non-root verification doesn't support non-numeric user (jenkins)
The job just checks the environment variables, and it runs fine and gives proper output. So why do we get this error message?
It seems to be an authentication problem.
Take a look at the following:
https://github.com/kubernetes/kubernetes/issues/45487
Jenkins kubernetes plugin not working
Hope this helps.
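Also, the warning literally says that non-root verification doesn't support a non-numeric user, so one sketch of a fix is to give the agent pod a numeric UID (1000 is the jenkins user in the official images; the pod itself is just a hypothetical test):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: jnlp-agent-test        # hypothetical
    spec:
      securityContext:
        runAsUser: 1000            # numeric, so the kubelet can verify non-root
        runAsNonRoot: true
      containers:
      - name: jnlp
        image: jenkins/jnlp-slave:alpine
    EOF

The same securityContext can be set on the Kubernetes plugin's pod template (for example through its raw YAML field) so dynamically provisioned agents inherit it.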
I am trying to create a Jenkins pipeline on OpenShift that automatically runs a Jenkins service when we start the pipeline build. I referred to a few templates online and created a Jenkins pod and a pipeline, but whenever I try to run the pipeline, it shows the build status as "not started".
Later, I created a standalone Jenkins service in OpenShift, wrote a Jenkinsfile, and tried executing it. I encountered authentication issues while connecting to OpenShift from Jenkins.
Can anyone tell me if I am missing something, or point me to any other working templates for a pipeline?
Thanks
It's because of permissions.
Jenkins runs as the jenkins user, and OpenShift doesn't know how to authenticate it.
Create a new service account for Jenkins in OpenShift.
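A sketch of what that looks like with the oc CLI (project name is an assumption):

    # Create a service account for Jenkins and let it edit objects in the project
    oc create sa jenkins -n my-project
    oc policy add-role-to-user edit -z jenkins -n my-project

Then point the Jenkins Kubernetes/OpenShift plugin at that service account's token instead of the jenkins system user.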