Kill Jenkins Slave when the build process is done? - jenkins

I am using the EC2 plugin in Jenkins to bring up slaves. I have configured it so that the slaves die if they are idle for 30 minutes.
For various reasons I want the slaves to be killed as soon as the build process completes.
Is there a way to do this? If so, can it be done through a pipeline script?
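One possible approach is a scripted pipeline that deletes its own node when the build finishes, letting the EC2 plugin terminate the instance. This is a sketch, not a confirmed recipe: the label `ec2-slave` and the build step are placeholders, and the `Jenkins.instance` calls touch the Jenkins model API, so the script must run outside the Groovy sandbox or have its signatures approved.

```groovy
import jenkins.model.Jenkins

// Assumed label for the EC2-provisioned agents
node('ec2-slave') {
    try {
        stage('Build') {
            sh 'make build'  // placeholder for your actual build steps
        }
    } finally {
        // Remove this agent from Jenkins as soon as the build is done;
        // the EC2 plugin should then terminate the backing instance.
        def agent = Jenkins.instance.getNode(env.NODE_NAME)
        if (agent != null) {
            Jenkins.instance.removeNode(agent)
        }
    }
}
```

An alternative, if you only need the instance gone quickly rather than immediately, is to lower the plugin's idle-termination time to 1 minute.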

Related

How to scale Jenkins slaves according to Build Queue on Kubernetes Plugin

I'm running Jenkins pipelines, and I have set the "Time in minutes to retain agent when idle" value in the slave configuration. After my jobs have run, if the slave agents are full when another job needs to run, the jobs wait in the Build Queue.
How can I scale my slave count on Jenkins with the Kubernetes plugin?
You're looking for the Container Cap setting. If you're using the Jenkins Configuration as Code (JCasC) plugin, this is configured via the containerCapStr key.
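For illustration, a JCasC fragment raising the cap might look like the sketch below. The cloud name, server URL, and namespace are assumptions; only the `containerCapStr` key is the setting the answer refers to.

```yaml
# Hypothetical jenkins.yaml fragment for the Kubernetes plugin under JCasC
jenkins:
  clouds:
    - kubernetes:
        name: "kubernetes"                       # assumed cloud name
        serverUrl: "https://kubernetes.default"  # assumed in-cluster API endpoint
        namespace: "jenkins"                     # assumed namespace for agent pods
        containerCapStr: "20"                    # max concurrent agent containers
```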

Fault tolerant Jenkins on DCOS

I am running a Jenkins server on DCOS as documented here https://docs.mesosphere.com/1.7/usage/tutorials/jenkins/.
The Jenkins server is able to spawn new mesos slaves when new jobs are scheduled and kill them when the job is completed.
But if a cluster node that has a Jenkins job running on it crashes, the Jenkins server doesn't re-run the job on other available nodes.
Is the Jenkins service on DCOS fault tolerant?
Can we re-run a job (on some other available node) that failed because the cluster node crashed mid-execution?
Jenkins itself does not rerun jobs that disappear. This is not specific to DC/OS or Mesos; it's just the way Jenkins works.
DC/OS and Mesos will make sure that Jenkins stays running and available to send jobs to, and in that sense it is "fault tolerant", but not in the way you are asking about.

jenkins on demand slaves windows

The on-demand slaves are created successfully from Jenkins. The first build on a slave succeeds, but subsequent builds fail. Restarting the slave or restarting the WinRM service allows the builds to proceed again.
The tcpdump shows no errors, and I can't figure out what the issue is. It looks like a problem with Jenkins communicating with the on-demand slaves over WinRM.
Has anybody faced a similar issue?
The on-demand slaves are Windows slaves.
The issue was with the "MaxMemoryPerShellMB" parameter of WinRM. It was set too low, so when npm or git performed a checkout, it ran out of memory in the WinRM shell.
I increased it to 1 GB and it's working fine now.
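For reference, the WinRM shell memory limit can be raised from an elevated command prompt on the slave; the 1024 MB value below matches the fix described above, but verify the quoting for your shell (PowerShell requires the quotes shown, plain cmd.exe does not).

```shell
# Run on the Windows slave in an elevated PowerShell session:
winrm set winrm/config/winrs '@{MaxMemoryPerShellMB="1024"}'

# Verify the new value:
winrm get winrm/config/winrs
```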

Abort Jenkins job if any interruption occur

We are using webhooks to trigger a Jenkins job when a merge request is created in a GitLab project. If any interruption occurs while the Jenkins job is running, the job should be aborted.
Consider the following cases for aborting the Jenkins job as well:
Slave disconnected from Jenkins master machine
Jenkins server restarted
Is there any plugin available to abort the job on interruption?
When a slave is disconnected from the server, the build will fail because no executor is available for the job to continue; a likely error is "No JDK found".
When Jenkins is restarted, it stops all jobs that are executing or queued.
AFAIK there is no plugin that perfectly matches your requirement. The reason is that Java only allows threads to be interrupted at a fixed set of locations, so depending on how a build hangs, the abort operation might not take effect.
But you can try the REST API: https://wiki.jenkins.io/display/JENKINS/Aborting+a+build
Or abort a job on certain conditions:
https://wiki.jenkins.io/display/JENKINS/Build-timeout+Plugin
Some more references:
How to stop an unstoppable zombie job on Jenkins without restarting the server?
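As a sketch of the REST approach: a build can be aborted by POSTing to its `stop` endpoint, with `term` and `kill` as progressively harder fallbacks for stuck pipeline builds. The URL, job name, build number, and credentials below are placeholders.

```shell
# Placeholders: JENKINS_URL, my-job, 42, user:apitoken
# Graceful abort of build #42 of job "my-job":
curl -X POST -u user:apitoken "https://JENKINS_URL/job/my-job/42/stop"

# Harder variants for stuck pipeline builds:
curl -X POST -u user:apitoken "https://JENKINS_URL/job/my-job/42/term"
curl -X POST -u user:apitoken "https://JENKINS_URL/job/my-job/42/kill"
```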

Is it possible to make Jenkins create workers from attached clouds faster?

I have an instance of Jenkins that uses the mesos plugin. Nearly all of my jobs get triggered via Mesos tasks. I would like to make worker generation a bit more aggressive.
The current issue is that I have all of the jobs marking the Mesos tasks as one-time-usage slaves, and when a build is in progress on one of these slaves, Jenkins forces any queued jobs to wait for a potential executor on these slaves instead of spinning up new instances.
Based on the logs, it also seems that Jenkins has a timer that periodically checks whether any slaves should be spun up based on the number of queued jobs (the excess workload). Is it possible to decrease the polling interval for that process?
From Mesos Jenkins Plugin Readme: over provisioning flags
By default, Jenkins spawns slaves conservatively. Say there are 2 builds in the queue: it won't spawn 2 executors immediately. It will spawn one executor and wait for some time for the first executor to be freed before deciding to spawn the second. Jenkins makes sure every executor it spawns is utilized to the maximum. If you want to override this behavior and immediately spawn an executor for each build in the queue, you can use these flags during Jenkins startup:
-Dhudson.slaves.NodeProvisioner.MARGIN=50 -Dhudson.slaves.NodeProvisioner.MARGIN0=0.85
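On a typical Debian/Ubuntu install these flags go into the Jenkins service's Java arguments; the file path and variable name below are assumptions that vary by distribution and install method, so check your own init script or systemd unit.

```shell
# Hypothetical /etc/default/jenkins fragment: pass the NodeProvisioner
# flags at startup so Jenkins provisions agents more aggressively.
JAVA_ARGS="$JAVA_ARGS -Dhudson.slaves.NodeProvisioner.MARGIN=50"
JAVA_ARGS="$JAVA_ARGS -Dhudson.slaves.NodeProvisioner.MARGIN0=0.85"
```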
