Jenkins kubernetes-plugin slaveConnectTimeout not honoured

I am running Jenkins 2.103 in Docker and have connected it to a Kubernetes-on-ARM cluster.
I have been able to manually connect the JNLP (v3.16) slave to the master; however, it takes around 15 minutes for it to fully connect and report as online. Once online, I can run builds as expected.
The problem is that the 'slaveConnectTimeout' setting in the podTemplate does not appear to be honoured in the pipeline configuration, and neither is the default 'Timeout in seconds for Jenkins connection' setting in the Pod Template section of the global settings.
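For reference, this is roughly how I am setting the timeout in the pipeline; the label and image name below are placeholders for my ARM setup:

    podTemplate(label: 'arm-agent',
                slaveConnectTimeout: 300,   // expect the agent to be online within 5 minutes
                containers: [
                    containerTemplate(name: 'jnlp',
                                      image: 'my-registry/jnlp-slave-arm:3.16',   // placeholder ARM JNLP image
                                      args: '${computer.jnlpmac} ${computer.name}')
                ]) {
        node('arm-agent') {
            stage('Build') {
                sh 'uname -a'
            }
        }
    }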
Has anyone been able to make this setting work, and does anyone have any idea what could be causing the 15-minute delay in registration?
This issue has now also been raised as bug JENKINS-49281.

The issue ended up being OpenJDK and me not fully understanding what the Kubernetes timeout is actually for.
The delay in agent registration is not just a Jenkins issue; I have seen the same behaviour in GoCD and other Java-based apps. It is a platform issue, not an app issue.

Related

Retry Jenkins Kubernetes agent connection

I am using the Kubernetes plugin in Jenkins pipelines to create agents in Kubernetes. I am able to launch, connect and run builds on the agents. However, when the cluster does not have enough capacity for the agent pod, the agent bring-up fails immediately with a "forbidden: exceeded quota" error. My question is: is there a way to retry 'n' times, with a sleep in between, to bring up the agent, so that other builds running on Kubernetes can finish and free up resources?
Thanks,
GD
The Kubernetes plugin version I was using is 1.27.7, and apparently this is a known bug in that version (https://issues.jenkins.io/browse/JENKINS-63976). The bug seems to be fixed in Kubernetes plugin version 1.28.6.
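If you are stuck on an older plugin version, one workaround is to wrap the agent allocation in a retry step with a sleep between attempts. A rough sketch in scripted pipeline; the label, retry count and timings are illustrative, and whether the quota error actually surfaces as a catchable failure will depend on the plugin version:

    // Retry the whole agent allocation a few times, sleeping between
    // attempts so other builds can finish and free up quota.
    retry(5) {
        try {
            podTemplate(label: 'k8s-agent', slaveConnectTimeout: 120) {
                node('k8s-agent') {
                    stage('Build') {
                        sh 'make build'   // placeholder build step
                    }
                }
            }
        } catch (err) {
            sleep(time: 60, unit: 'SECONDS')   // wait before the next attempt
            throw err                          // rethrow so retry() runs the block again
        }
    }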

Jenkins does not allow saving the configuration

I am running a Jenkins multibranch job, and suddenly it does not allow me to save configuration changes; it just keeps on loading without any timeout error.
Can someone please help me with this?
You could have a look at the Jenkins master machine's CPU and memory and see what is consuming them. I have seen this happen when CPU usage is nearly 100%. In that case, restarting the Jenkins process or the Jenkins master machine could help.
Try to remember, or ask colleagues, whether there have been any recent changes to the Jenkins master machine. We had similar issues after installing plugins.
Avoid executing jobs on the Jenkins master; use slave agents instead.
You may also need to clean up old builds if you are not doing this already.
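If build clean-up turns out to be part of the problem, a per-job retention policy can be set in the pipeline itself. A minimal declarative-pipeline sketch; the numbers are just examples:

    pipeline {
        agent any
        options {
            // Keep only the most recent builds and artifacts; tune the numbers to your needs.
            buildDiscarder(logRotator(numToKeepStr: '20', artifactNumToKeepStr: '5'))
        }
        stages {
            stage('Build') {
                steps {
                    echo 'build steps here'
                }
            }
        }
    }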
In my case, after disabling and re-enabling all plugins one by one, it turned out to be the "AWS SQS Build Trigger Plugin" that was causing the "Save"/"Apply" buttons to move and not be functional.

Jenkins on-demand Windows slaves

The on-demand slaves are being created successfully from Jenkins. The first build on a slave is successful, but subsequent builds fail. Restarting the slave or restarting the WinRM service allows the builds to proceed again.
A tcpdump shows no errors, and I can't figure out what the issue is. It looks like a problem with Jenkins communicating with the on-demand slaves over WinRM.
Has anybody faced a similar issue?
The on-demand slaves are Windows slaves.
The issue was with the "MaxMemoryPerShellMB" parameter of WinRM. It was set too low, so when npm or git was doing a checkout it was running out of memory in the WinRM shell.
I have increased it to 1 GB and it is working fine now.
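For anyone looking for the exact command, the limit can be raised from an elevated command prompt on the slave with something along these lines (1024 corresponds to the 1 GB mentioned above; quote the @{...} argument if you run it from PowerShell rather than cmd):

    winrm set winrm/config/winrs @{MaxMemoryPerShellMB="1024"}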

Jenkins docker-plugin - Job does not start (waiting for executor)

I'm trying (not hard enough, it seems) to get our Jenkins server to provision a Jenkins slave using Docker.
I have installed the Docker plugin and configured it according to the description on its page. I have also tested the connectivity, and at least that part works.
I have also configured one label in the plugin and in my job. I even get a nice page showing me the connected jobs for this slave.
When I then try to start a build, nothing really happens. A build is scheduled but never started ("pending - Waiting for next available executor").
From the message it would seem that Jenkins is not able to start the slave via Docker.
I'm using Docker 1.6.2 and plugin version 0.10.1.
Any clue as to what is going on would be much appreciated!
It seems the problem was that I had specified the Docker version in the plugin config. That is apparently a no-go, according to this post.

Jenkins jobs are not visible after a network issue

This morning we noticed that all the PuTTY sessions running Jenkins jobs were closed due to a network issue. Once the network was up, we restarted Jenkins and observed that the Jenkins dashboard was not showing ANY jobs. We had around 80 jobs on the dashboard. We are using VM servers for the master/slave setup. config.xml is fine. What do we do? How do we get back on track?
All Jenkins jobs are basically XML config files kept in the Jenkins home directory.
If your Jenkins is not showing these jobs, then it is not using the same home directory.
Check the Jenkins process to see which home directory it is pointing to.
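One quick way to check which home directory the running instance is actually using is the Script Console (Manage Jenkins > Script Console), with a small Groovy snippet along these lines:

    // Prints the JENKINS_HOME of the running instance so you can compare it
    // against the directory that contains your ~80 job folders.
    println(jenkins.model.Jenkins.getInstance().getRootDir())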
