Is there a way to check Jenkins reboot history?

I was using the Restart Safely option in Jenkins to restart the master after the running jobs completed.
However, Jenkins was unattended after that, and it is now up and running. Is there a way to check when the master restarted, in the form of a Jenkins reboot history? (Something like a ps -ef equivalent would also work, since the process start time would tell me when the restart happened.)

It will be in /var/log/jenkins/jenkins.log.
Check for the "Jenkins is fully up and running" message; you should also see messages about the scheduled restart.
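If the log has already rotated away, the master's JVM start time gives the same information. This is a minimal script-console sketch (not part of the original answer, just the standard java.lang.management API):
// Manage Jenkins -> Script Console: print when the master JVM started and how long it has been up
import java.lang.management.ManagementFactory
def runtime = ManagementFactory.getRuntimeMXBean()
println "JVM started at: ${new Date(runtime.startTime)}"
println "Uptime: ${runtime.uptime.intdiv(60000)} minutes"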

Related

How to reboot Jenkins node using shell in Groovy

I am writing a Groovy script to perform an automatic reboot of Windows servers. In the script, I first take the nodes offline, then check whether there are any running builds; if there aren't, I perform a restart.
I wanted to use the safeRestart() method, but it doesn't support the import statement I am using when looping through the nodes. I have seen an execute() method which basically executes a shell command from Groovy.
How would I execute a restart of the Windows computers using execute()?
Not sure if this will answer your question directly, but it should point you in the right direction...
You can leverage this S/O question: Run a remote command on all Jenkins slaves via Master's script console, or this Gist: run_command_on_all_slaves.groovy
By the way, the Jenkins API does seem to support running a script directly on the server (Computer).
Your actual command should be shutdown /r
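To illustrate one way to wire that together from the master's script console, here is a hedged sketch; the node name 'win-node-1' is a placeholder, the node has to be online, and it assumes hudson.util.RemotingDiagnostics is acceptable for running the command over the agent's channel:
// Run from the master's script console; the quoted snippet executes on the agent's JVM
import hudson.util.RemotingDiagnostics
import jenkins.model.Jenkins
def channel = Jenkins.instance.getNode('win-node-1')?.toComputer()?.channel   // placeholder node name
if (channel != null) {
    // The inner Groovy string runs on the agent and shells out to the Windows shutdown command
    println RemotingDiagnostics.executeGroovy('"shutdown /r /t 0".execute().text', channel)
} else {
    println 'Node is offline; nothing to restart.'
}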
I don't believe you can do this unless the node is online. Disconnecting the node stops the Jenkins slave process, and then there's nothing running on the node, so it's not clear what control you'd have. Instead, you want to block the queue and let the existing jobs finish:
Jenkins.instance.getNode('Node-Name').toComputer().setAcceptingTasks(false)
and check:
Jenkins.instance.getNode('Node-Name').toComputer().countBusy() == 0
Then run your (work on server) restart command
When the server is available again, launch the node and open the queue.
Jenkins.instance.getNode('Node-Name').toComputer().connect(false)
Jenkins.instance.getNode('Node-Name').toComputer().setAcceptingTasks(true)
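Putting those pieces together, a minimal script-console sketch (the node name and the 30-second polling loop are just illustrative) could look like:
import jenkins.model.Jenkins
def computer = Jenkins.instance.getNode('Node-Name').toComputer()   // placeholder node name
computer.setAcceptingTasks(false)        // block new builds on this node
while (computer.countBusy() > 0) {       // wait for the running builds to finish
    sleep(30 * 1000)
}
// ...trigger the OS-level restart here, then wait until the machine is reachable again...
computer.connect(false)                  // relaunch the agent
computer.setAcceptingTasks(true)         // open the queue again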
Hope that helps.

One execution per Windows VMware VM as Jenkins slaves?

I am trying to run some automated acceptance tests on a Windows VM, but am running into some problems.
Here is what I want: a job which always runs on a freshly reverted VM. This job will get an MSI installer from an upstream job, install it, and then run some automated tests on it, in this case using Robot Framework (but that doesn't really matter here).
I have set up the slave in the vSphere plugin to have only one executor and to disconnect after one execution. On disconnect it shuts down and reverts. My hope was that this meant it would run one Jenkins job and then revert, the next job would get a fresh snapshot, and so would the next, and so on.
The problem is that if a job is queued waiting for the VM slave, it starts as soon as the first job finishes, before the VM has shut down and reverted. The signal to shut down and revert has already been sent, however, so the next job fails almost immediately as the VM shuts down.
Everything works fine as long as jobs needing the VM aren't queued while another is running, but if they are, I run into this problem.
Can anyone suggest a way to fix this?
Am I better off using vSphere build steps rather than setting up a build slave in this fashion? If so, how exactly do I go about getting the same workflow using build steps and (I assume) pipelined builds?
Thanks
You can set a 'Quiet period' - it's in Advanced Project Options when you create a build. You should set it on the parent job; it is the time to wait before executing the dependent job.
If you increase the wait time, the VM will have gone down before the second job starts...
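For reference, the same setting can also be applied from the script console. A hedged sketch, assuming a freestyle (AbstractProject-based) job with the hypothetical name 'acceptance-tests':
import jenkins.model.Jenkins
def job = Jenkins.instance.getItemByFullName('acceptance-tests')   // hypothetical job name
job.setQuietPeriod(120)   // seconds the build waits in the queue before it starts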
It turns out the version of the vSphere plugin I was using was outdated; this problem is fixed in the newer version.

Jenkins kill all child processes

I have a Jenkins job that runs a bash script.
In the bash script I effectively perform two actions, something like:
java ApplicationA &    # start ApplicationA in the background
PID_A=$!               # remember its process ID
java ApplicationB      # run ApplicationB in the foreground
kill $PID_A            # stop ApplicationA once ApplicationB finishes
but if the job is manually aborted, ApplicationA remains alive (as can be seen with a ps -ef on the node machine). I cannot rely on trapping signals, because that won't work if Jenkins sends signal 9 (SIGKILL cannot be trapped).
It would be ideal if this job could be configured to simply kill all processes that it spawns. How can I do that?
Actually, by default, Jenkins has a feature called ProcessTreeKiller which is responsible for making sure there are no processes left running after the job execution.
The link above explains how to disable that feature. Are you sure you haven't disabled it by mistake somehow?
Edit:
Following the comments by the author: based on the information about disabling ProcessTreeKiller, to achieve the inverse you must make sure the spawned processes inherit the BUILD_ID environment variable of the Jenkins job. This way, when ProcessTreeKiller looks through the running processes for ones to kill, it will find these as well:
export BUILD_ID=$BUILD_ID
You can also use the Build Result Trigger plugin, configure a second job to clean up your applications, and configure it to monitor the first job for ABORTED state as a trigger.

How to run jenkins job as a system user?

Right now, my Jenkins jobs are run by the Tomcat Server user. I wanted them to run as the user 'Admin', so I tried creating a slave and added the same Jenkins machine as the slave.
I have also added this as a Windows service and have configured the Admin user/password on the Logon tab.
But still, when I run a job which executes the UI tests, I'm not able to see them running in Firefox, yet the job runs and the screenshots are captured!
Are you asking how to have Jenkins spawn a process in your session that you can see on the monitor?
Have a look here: Open Excel on Jenkins CI, replace excel with whatever you are launching.
If you run Jenkins as a Windows service, it won't allow GUI execution.
It only allows jobs running in the background.
If you want to run UI tests, stop your Jenkins service and use some other way to connect your slave.

Vagrant aborted at end of Jenkins job

I've been having this problem for a while. Vagrant boxes abort at the end of a Jenkins job. I've limited the job to just a script with
vagrant up
sleep 60
For 60 seconds the Vagrant boxes are running, but the second the job finishes they are aborted.
This behaviour is caused by the Jenkins process tree killer. I got it to work by running Jenkins as follows:
java -Dhudson.util.ProcessTree.disable=true -jar jenkins-1.537.war
Another (less global) work-around is to run vagrant as follows:
BUILD_ID=dontKillMe vagrant up
Makes sense in retrospect. Processes launched by a Jenkins job should be cleaned up at the end. Of course, this would be a "gotcha" if you're attempting to use Jenkins to launch long-running processes.
Maybe you are using an older version of the Jenkins plugin, but it now contains a checkbox called 'Don't Kill Me'. You have to check this to keep the VM up.
[screenshot: jenkins config]
