Jenkins build can't finish

My build runs via a Groovy script (no Docker, Maven, or k8s; it just calls another sh file). When it is triggered by a timer, it sometimes fails to finish with this error:
process apparently never started in jenkinsWs/myjob#tmp/durable-d6c09021
(running Jenkins temporarily with -Dorg.jenkinsci.plugins.durabletask.BourneShellScript.LAUNCH_DIAGNOSTICS=true might make the problem clearer)
Cannot contact Slave29: java.io.FileNotFoundException: File 'jenkinsWs/myjob#tmp/durable-d6c09021/output.txt' does not exist
But it always works when I run it manually. I've used monitoring tools on the slave machine to track what happens, but nothing looks unusual.
I've tried every solution from this, but with no luck.
Thanks in advance for any ideas.
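For reference, the diagnostics that the message mentions can also be switched on at runtime from the script console, without restarting Jenkins with the -D flag. A minimal sketch; the property name is taken verbatim from the error text, and the setting lasts until the next restart:

// Script console sketch: enable durable-task launch diagnostics at runtime.
// The property name comes straight from the error message above.
System.setProperty(
    'org.jenkinsci.plugins.durabletask.BourneShellScript.LAUNCH_DIAGNOSTICS',
    'true')

With that set, the next failing timer-triggered run should log more detail about why the wrapper process never started on Slave29.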

Related

I have a batch script "devscript.cmd" (on the same PC as Jenkins) that I want executed in a Jenkins job. How do I do it in the best possible way?

I have a job running perfectly in Jenkins called 'MyBuild'. The source code is from my Azure repo, and it builds everything and even creates artifacts for me. Perfect up to that point.
The files generated by this 'MyBuild' job come with a default config from the Azure repo. So I have a batch script called 'devscript.cmd' on the same PC/server where Jenkins is running. This script makes a few changes to the config files in the last successful build. After making those changes it should start a dotnet run of my application.
What I tried so far: I created another freestyle job in Jenkins called 'Development_Run'. I left all the configuration at its defaults and added an "Execute Windows batch command" build step.
The job itself runs fine, since it just navigates to that path and executes the script.
The problem is that nothing the script is supposed to do actually happens. Note: I tested the script on the PC first and it runs fine before I start the build in the 'Development_Run' job. So the script is executed, but its contents don't actually take effect.
Where could I be going wrong? Or can someone suggest an alternative way to achieve this?
The reason for all this is automation. I could do this manually every time, but since development is ongoing I need it automated.
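For reference, a minimal pipeline sketch of how such a step is typically wired up. The path to devscript.cmd is a placeholder; adjust it to wherever the script actually lives on the Jenkins machine:

// Hedged sketch of a pipeline equivalent of the 'Development_Run' job.
// The script path below is hypothetical.
pipeline {
    agent any
    stages {
        stage('Apply dev config and run') {
            steps {
                // 'call' returns control to this step after the script finishes;
                // without it the batch interpreter jumps into devscript.cmd and
                // the rest of the step body (if any) never runs.
                bat 'call C:\\scripts\\devscript.cmd'
            }
        }
    }
}

One thing to keep in mind with either job type: Jenkins kills processes spawned by a build when the build ends, so if devscript.cmd finishes by launching dotnet run as a long-running process, that process may be terminated as soon as the job completes.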

How to reboot a Jenkins slave in a pipeline without the job failing

Here is the thing: I have a program that sometimes gets stuck, and when that happens I need to reboot the machine.
So I want to reboot my Jenkins slave when the program gets stuck, and then continue executing the rest of my program without marking the whole job as failed.
Can anyone tell me how to do that?
You may want to use the Restart from stage feature, as documented here.
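Note that Restart from stage only applies to Declarative Pipelines, so the work needs to be split into stages. A rough sketch with placeholder labels and commands, where a run that died while the slave was rebooting can later be restarted from the stage after the reboot:

// Hedged sketch: stage and label names are placeholders.
pipeline {
    agent none
    stages {
        stage('Run program') {
            agent { label 'my-windows-slave' }
            steps {
                bat 'run_my_program.cmd'      // the program that sometimes gets stuck
            }
        }
        stage('Remaining steps') {
            agent { label 'my-windows-slave' }
            steps {
                bat 'remaining_steps.cmd'     // the part you want to resume after a reboot
            }
        }
    }
}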

How to reboot a Jenkins node using a shell command in Groovy

I am writing a Groovy script to perform an automatic reboot of Windows servers. In the script I first take the nodes offline, then check whether any builds are running; if there aren't, I perform a restart.
I wanted to use the safeRestart() method, but it doesn't support the import statement I am using when looping through the nodes. I have seen an execute() method that basically executes a shell command from Groovy.
How would I execute a restart of the Windows machines using execute()?
Not sure if this answers your question directly, but it should point you in the right direction...
You can leverage this S/O question: Run a remote command on all Jenkins slaves via the Master's script console, or this Gist: run_command_on_all_slaves.groovy
By the way, the Jenkins API does seem to support running a script directly on the server (Computer).
Your actual command should be shutdown /r.
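To make that concrete, a sketch along the lines of the linked question and gist, run from the master's script console. The node name is a placeholder:

// Hedged sketch: run a command on one specific agent from the master's script console.
import hudson.util.RemotingDiagnostics
import jenkins.model.Jenkins

def channel = Jenkins.instance.getNode('Node-Name').toComputer().getChannel()
// shutdown /r reboots Windows; /t 0 means "immediately"
println RemotingDiagnostics.executeGroovy('"shutdown /r /t 0".execute().text', channel)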
I don't believe you can do this unless the node is online. Disconnecting the node stops the Jenkins slave process, and then there's nothing running on the node, so it's not clear what control you'd have. Instead, block the queue and let the existing jobs finish:
Jenkins.instance.getNode('Node-Name').toComputer().setAcceptingTasks(false)
and check:
Jenkins.instance.getNode('Node-Name').toComputer().countBusy() == 0
Then run your restart command (performed on the server itself).
When the server is available again, launch the node and open the queue.
Jenkins.instance.getNode('Node-Name').getComputer().launch()
Jenkins.instance.getNode('Node-Name').getComputer().setAcceptingTasks(true)
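Putting those pieces together, a rough script console sketch. The node name, polling interval and the actual reboot mechanism are all placeholders to adapt:

// Hedged sketch: drain the node, wait for running builds to finish,
// reboot the machine, then bring the node back online.
import jenkins.model.Jenkins

def computer = Jenkins.instance.getNode('Node-Name').toComputer()

computer.setAcceptingTasks(false)      // block the queue for this node

while (computer.countBusy() > 0) {     // wait for running builds to drain
    sleep(10 * 1000)
}

// Run the restart command on the server here, e.g. shutdown /r via the node's
// channel or an out-of-band SSH/WinRM call - whatever fits your environment.
// In practice, wait until the machine answers again before reconnecting.

computer.connect(true)                 // relaunch the agent (forced reconnect)
computer.setAcceptingTasks(true)       // reopen the queue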
Hope that helps.

One execution per Windows VMware VM as Jenkins slaves?

I am trying to run some automated acceptance tests on a Windows VM, but am running into some problems.
Here is what I want: a job that always runs on a freshly reverted VM. This job gets an MSI installer from an upstream job, installs it, and then runs some automated tests on it, in this case using Robot Framework (but that doesn't really matter here).
I have set up the slave in the vSphere plugin to have only one executor and to disconnect after one execution. On disconnect it shuts down and reverts. My hope was that this meant it would run one Jenkins job and then revert, the next job would get a fresh snapshot, and so on.
The problem is that if a job is queued waiting for the VM slave, it starts as soon as the first job finishes, before the VM has shut down and reverted. The signal to shut down and revert has already been sent, however, so the next job fails almost immediately as the VM shuts down.
Everything works fine as long as jobs needing the VM aren't queued while another is running, but when they are, I run into this problem.
Can anyone suggest a way to fix this?
Am I better off using vSphere build steps rather than setting up a build slave in this fashion? If so, how exactly do I go about getting the same workflow using build steps and (I assume) pipelined builds?
Thanks
You can set a 'Quiet period' - it's in Advanced Project Options when you create a build. You should set it on the parent job; it is the time to wait before executing the dependent job.
If you increase the wait time, the VM will have gone down before the second job starts...
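If the job involved happens to be a Pipeline job, the same delay can also be expressed in the Jenkinsfile. A small sketch; the 120-second value is only an example:

// Hedged sketch: the quiet period can also be set from a Declarative Pipeline,
// delaying the start of each build by the given number of seconds in the queue.
pipeline {
    agent any
    options {
        quietPeriod(120)          // wait 120 s before the build actually starts
    }
    stages {
        stage('Acceptance tests') {
            steps {
                bat 'run_tests.cmd'   // placeholder for the actual test step
            }
        }
    }
}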
It turns out the version of the vSphere plugin I was using was outdated; this bug is fixed in the newer version.

Jenkins slaves go offline or hang when archiving artifacts

In the job's post-build actions, I am archiving the artifacts. 90% of the time, when the Jenkins job reaches this step, the slave on which it is running hangs or goes offline, or the job hangs, and if I kill the job it throws "Caused by: java.lang.OutOfMemoryError: Java heap space".
I am running Jenkins version 1.560.
Has anyone seen this, or is anyone aware of a fix? Any help is appreciated.
Thanks
It looks like you're running into https://issues.jenkins-ci.org/browse/JENKINS-22734 which started in version 1.560 and will be fixed in 1.563.
It's always a good idea to browse the Jenkins change log, especially the Community Ratings section, when you install a new version.
Whenever the Hudson/Jenkins master runs out of space, slaves disconnect and have to be restarted.
Check the master box and see how much space is allocated to the drive where Hudson is running.
Another thing to note is that even when a job runs on a slave, artifacts are always archived on the master, so space on the master should be allocated accordingly.
I ran into this issue with Jenkins 1.560. For now I have disabled archiving of the Maven artifacts in the "Build" section.
