How to terminate child process started from Jenkins job? - jenkins

I have created a Jenkins job which executes a jar. This jar starts another process from within it. On aborting the Jenkins job from the UI, the processes started on the slave machines are not killed, even though the console shows Aborted. Please help!

You can simply delete the job completely:
http://MY_SERVER/job/JOB_NAME/doDelete
Keep in mind that this will completely remove the job and all of its builds; you will no longer have any access to them.
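If you prefer to do this from a script, the same endpoint can be called through Jenkins' REST interface. A minimal sketch, with the server, job name, and credentials as placeholders; note that on modern Jenkins doDelete must be a POST (a plain GET is rejected):

```shell
#!/bin/sh
# Placeholder values - substitute your own server, job name, and credentials.
JENKINS_URL="http://MY_SERVER"
JOB_NAME="JOB_NAME"

# Build the delete endpoint for the job.
DELETE_URL="$JENKINS_URL/job/$JOB_NAME/doDelete"
echo "$DELETE_URL"

# To actually delete the job, authenticate with a user and API token:
#   curl -X POST "$DELETE_URL" --user "user:apiToken"
```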

Related

Jenkins build can't finish

For my build via a Groovy script (no Docker, Maven, or Kubernetes involved, it just calls another sh file), when triggered by timer it sometimes cannot finish, failing with:
process apparently never started in jenkinsWs/myjob#tmp/durable-d6c09021
(running Jenkins temporarily with -Dorg.jenkinsci.plugins.durabletask.BourneShellScript.LAUNCH_DIAGNOSTICS=true might make the problem clearer)
Cannot contact Slave29: java.io.FileNotFoundException: File 'jenkinsWs/myjob#tmp/durable-d6c09021/output.txt' does not exist
But it always works when I run it manually. I've used monitoring tools on my slave machine to track what happens, but found nothing strange.
I've tried all the solutions from this, but no hope.
Thanks in advance for any ideas.
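For reference, the diagnostic flag mentioned in that error message is a JVM system property, so it has to be passed when Jenkins itself is launched, not to the job. A hedged sketch of what that looks like for a jenkins.war started from the command line (your startup mechanism may differ, e.g. a service unit or container entrypoint):

```shell
# Start Jenkins with extra durable-task diagnostics enabled (temporary, for debugging).
# The -D property must come before -jar so the JVM picks it up.
java -Dorg.jenkinsci.plugins.durabletask.BourneShellScript.LAUNCH_DIAGNOSTICS=true \
     -jar jenkins.war
```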

Update jenkins during long running jobs

I want to upgrade my Jenkins master without aborting long-running jobs on the slaves or waiting for them to finish. Is there a plugin available that provides this feature?
We have several build jobs running regression and integration tests which take hours to run. Often at least one of those jobs is running, making it hard to restart Jenkins after updates. I know that it is possible to block the queue. We tried this, but it hinders more than it helps.
What we are looking for is a plugin that keeps jobs running on the slaves, caches their output as soon as the connection to the master is interrupted, and sends the remaining output to the master when the master is up again. Does anybody know of a plugin providing this feature?

Jenkins job is not stopping after completion of execution

I am running a Jenkins job that uses Maven, and once the job is complete it does not terminate until we terminate it manually. The console output shows the results, but the build never reports success; it just keeps showing the processing/loading symbol. Can anyone tell me how to stop the job after execution finishes?
Do we need to terminate it manually, or
do we need to add anything in a post-build step to stop it after successful execution?
Do we have to set something in the configuration to terminate automatically?
Please, can anyone help me out?
A few things could be going on here:
Does the Maven build execute any other goals after running tests? If so, those could be hanging.
Does your build run on a slave? If so, Jenkins copies log files and other artifacts back to the master after completing the build steps but before marking the build as complete. You may have a network or I/O bottleneck here.
If you can't figure out the root cause and just want the build to terminate without intervention, you can use the Build Timeout plugin.
If you have jobs running in parallel, then some plugins have to wait for the older jobs to finish before the current one can. Not sure if this is your situation.
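Besides the Build Timeout plugin, a crude script-level fallback is to wrap the suspect step in GNU coreutils' timeout(1), assuming it is available on the agent. A sketch, with sleep standing in for the real Maven invocation:

```shell
#!/bin/sh
# Give the build step a hard wall-clock limit so a hung step cannot run forever.
# `sleep 60` stands in for the real command, e.g. `mvn verify`.
timeout 2s sleep 60
status=$?
echo "exit status: $status"   # GNU timeout exits with 124 when the limit is hit
```

If the step hangs, the job fails fast with a distinctive exit code instead of spinning until someone aborts it by hand.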
After using webdriver.quit() in our Selenium project, it's working fine. The job completes and the reports are generated.

One execution per Windows VMware VM as Jenkins slaves?

I am trying to run some automated acceptance tests on a windows VM but am running into some problems.
Here is what I want: a job which always runs on a freshly reverted VM. This job will get an MSI installer from an upstream job, install it, and then run some automated tests against it, in this case using Robot Framework (but that doesn't really matter here).
I have set up the slave in the vSphere plugin to have only one executor and to disconnect after one execution. On disconnect it shuts down and reverts. My hope was that this meant it would run one Jenkins job and then revert, so the next job would get a fresh snapshot, and so on.
The problem is that if a job is queued waiting for the VM slave, it starts as soon as the first job finishes, before the VM has shut down and reverted. The signal to shut down and revert has already been sent, however, so the next job fails almost immediately as the VM shuts down.
Everything works fine as long as jobs needing the VM aren't queued while another is running, but if they are I run into this problem.
Can anyone suggest a way to fix this?
Am I better off using vSphere build steps rather than setting up a build slave in this fashion? If so, how exactly do I go about getting the same workflow using build steps and (I assume) pipelined builds?
Thanks
You can set a 'Quiet period' - it's in Advanced Project Options when you create a build. You should set it on the parent job; this is the time to wait before executing the dependent job.
If you increase the wait time, the server will go down before the second job starts.
Turns out the version of the vSphere plugin I was using was outdated; this bug is fixed in the newer version.

Jenkins kill all child processes

I have a jenkins job that runs a bash script.
In the bash script I perform effectively two actions, something like
java ApplicationA &
PID_A=$!
java ApplicationB
kill $PID_A
but if the job is manually aborted, ApplicationA stays alive (as can be seen with a ps -ef on the node machine). I cannot use trapping and so on, because that won't work if Jenkins sends signal 9 (SIGKILL cannot be trapped).
It would be ideal if this job could be configured to simply kill all processes that it spawns, how can I do that?
Actually, by default Jenkins has a feature called ProcessTreeKiller which is responsible for making sure no processes are left running after the job execution.
The link above explains how to disable that feature. Are you sure you haven't disabled it by mistake somehow?
Edit:
Following the comments by the author: based on the information about disabling ProcessTreeKiller, to achieve the inverse you must set the environment variable BUILD_ID to the build id of the Jenkins job. This way, when ProcessTreeKiller looks through the running processes to kill, it will find this one as well:
export BUILD_ID=$BUILD_ID
You can also use the BuildResultTrigger plugin: configure a second job to clean up your applications, and set it to monitor the first job for the ABORTED state as its trigger.
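If ProcessTreeKiller cannot be relied on, another script-level fallback is to put the background application in its own process group and kill the whole group yourself. A minimal sketch, with sleep standing in for java ApplicationA; it needs job control, so run it under bash:

```shell
#!/bin/bash
set -m                     # enable job control: each background job gets its own process group
sleep 300 &                # stands in for: java ApplicationA &
PID_A=$!

# ... run the foreground work here, e.g.: java ApplicationB ...

kill -TERM -- -"$PID_A"    # a negative PID targets the whole process group
wait "$PID_A" 2>/dev/null  # reap the child and suppress the job-status noise
```

This still won't survive a raw SIGKILL to the wrapper script itself, but it reliably takes down everything the background step spawned whenever the script gets a chance to run the kill.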
