How to reboot a Jenkins node using shell in Groovy

I am writing a Groovy script to perform an automatic reboot of Windows servers. In the script, I first take the nodes offline, then check whether any builds are running, and if there aren't, perform a restart.
I wanted to use the safeRestart() method, but it doesn't support the import statement I am using when looping through the nodes. I have seen an execute() method which basically executes a shell line of code in Groovy.
How would I execute a restart of the Windows computers using execute()?

Not sure if this will answer your question directly, but it should point you in the right direction...
You can leverage this S/O question: Run a remote command on all Jenkins slaves via Master's script console or this Gist: run_command_on_all_slaves.groovy
By the way, the Jenkins API does seem to support running a script directly on the server (Computer).
Your actual command would be shutdown /r
I don't believe you can do this unless the node is online. Disconnecting the node stops the Jenkins slave process; after that, nothing is running on the node, so it's not clear what control you would have. Instead, you want to block the queue and let the existing jobs finish:
Jenkins.instance.getNode('Node-Name').toComputer().setAcceptingTasks(false)
and check:
Jenkins.instance.getNode('Node-Name').toComputer().countBusy() == 0
Then run your restart command (the actual work on the server).
When the server is available again, launch the node and open the queue.
Jenkins.instance.getNode('Node-Name').toComputer().launch()
Jenkins.instance.getNode('Node-Name').toComputer().setAcceptingTasks(true)
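Putting those pieces together, here is a rough script-console sketch (assuming a node named 'Node-Name'; RemotingDiagnostics.executeGroovy is the same mechanism the linked gist uses to run a command on the agent's JVM):
import hudson.util.RemotingDiagnostics
import jenkins.model.Jenkins

def computer = Jenkins.instance.getNode('Node-Name').toComputer()
computer.setAcceptingTasks(false)              // block the queue for this node

while (computer.countBusy() > 0) {             // wait for running builds to finish
    sleep(10000)
}

// run the restart on the agent itself, over its remoting channel
RemotingDiagnostics.executeGroovy('"shutdown /r /t 0".execute()', computer.getChannel())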
Hope that helps.

Related

UI issues while executing UFT test via Jenkins

I am having Jenkins running as a service and have a job to execute UFT tests on a remote slave. As part of the pipeline I am required to un-install our product, restart the slave, install the product (latest version) and start the test execution.
Since UFT tests need a dedicated UI, I am trying to launch a mstsc connection to the test VM from a temp VM. But since Jenkins is running as a service the mstsc process runs as a background process on the temp VM. Due to this UFT tests don't get a dedicated UI and some of the tests fail.
I tried running Jenkins using the war file instead of as a service, but after 30-40 minutes or so the master-slave connection drops.
Any workaround / tweak would be appreciated.
You need to run your Jenkins remote agent (war) as a normal process and not as a service; otherwise, as you mentioned, there is no desktop for it.
My Proposal:
Make sure the Jenkins remote agent is running as a normal OS process (on both VMs). You can have a Windows Scheduled Task that launches this process at logon and checks every 5 minutes whether it is still alive (and restarts it if not).
After the temporary VM (let's call it the Gateway) wakes up your Test VM, the Test VM should execute a tscon command, which redirects the currently active RDP session to the console (the physical monitor, which on a virtual machine is of course virtual). This keeps your UI session alive until the next restart, without having to worry about the Gateway.
See the tscon documentation. Example: tscon rdp-tcp#1 /dest:console. This can again be handled with a Scheduled Task executed at logon (waiting a few seconds just to make sure).
Have Caffeine.exe or MouseJiggle.exe running in the background (also launched at logon) on your test computers to make sure the screen is never locked and no screen saver activates. Both tools are free.
If your Jenkins connection drops, that is a different issue and has nothing to do with UFT. In my case this combination works perfectly fine. It is also easy to automate the installation of all of this: Windows batch and VBS scripts can put the mentioned tools on your %PATH% and create the Scheduled Tasks programmatically.
Bonus tip: to avoid a taskkill java.exe command killing your remote agent, you can simply rename your JVM's java.exe to jenkins_remote_agent.exe and use that as the Jenkins remote agent executable.
UFT requires an interactive session for some Win32 operations.
In the Tools ⇨ Options menu, select General ⇨ Run Sessions; there you will find an option to enable continued testing on locked/disconnected remote computers, which may help in your case too.

Run executable after jenkins pipeline build

I have a console application that needs to be permanently running on the same machine on which I run Jenkins. After I build and publish the .exe file I need to run it, but if I use bat "pathtofile\\filename.exe", the pipeline waits for the process to finish, which it never will, because the process is a socket server that keeps running and listening.
Is there a way to Run a Fire-and-Forget command to start the .exe?
Have you considered creating a Windows Service? If this is not an option, I would suggest using the /B flag of the start command. For example,
start /B yourapp.exe
will execute your app in the background, somewhat similar to Linux's yourapp & command form.
To check out all the options of the start command, you can type help start in a Command Line Window.
Jenkins gives guidance and examples on this for freestyle projects (but the principle should be the same for pipeline), see:
https://wiki.jenkins.io/display/JENKINS/Spawning+processes+from+build
For Windows, there are a number of options: using the at command (runs a job at a specific time); creating a wrapper script to run your bat file; or creating a scheduled task (similar to the at command). I've used the scheduled task approach (again for freestyle builds rather than pipeline).
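For a pipeline, combining the start /B idea with the process-spawning guidance above might look roughly like this sketch (JENKINS_NODE_COOKIE=dontKillMe is the commonly cited way to stop the Pipeline process tree killer from reaping the spawned process; the path is the placeholder from the question):
withEnv(['JENKINS_NODE_COOKIE=dontKillMe']) {
    // start /B detaches the server process so the bat step can return
    bat 'start /B pathtofile\\filename.exe'
}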

Can I use Jenkins Slaves for automated testing on different operating systems?

I am setting up a CI workflow using Jenkins. I have various code bases that I would like to be able to test on different operating systems from Windows Server 2012 through 2003 and also Red Hat, etc.
I'm wondering if using Jenkins slaves would be an effective solution for this.
Specific questions are things like:
If a master executes a project, where is the project defined vs where does the job execute?
If I want to execute a job that tests a language I don't want to support on the master's operating system (think Ruby on Windows), do I still need to make the master aware of that language in order to define the job, say by installing the relevant plugin?
If I define a slave that's running inside a VM and I stop the VM, when the VM comes back up, am I going to have to run some sort of startup task on the slave, or pre-execute task on the master, to re-register the slave before I can start a project running on the slave?
When the slave task completes and the results are reported back, are those results stored on the master such that I can shut down the slave and still have access to previous test run results and trending information?
Thanks in advance for any advice.
If a master executes a project, where is the project defined vs where does the job execute?
The jobs are defined and stored on the Master; they are executed on the Slave machines. You can control which jobs execute on which slaves by using labels.
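For example, a scripted pipeline can pin a build to agents carrying given labels (the label names and command here are hypothetical):
node('windows2012 && ruby') {
    bat 'rake test'    // runs only on a slave that has both labels
}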
If I want to execute a job that tests a language I don't want to support on the master's operating system (think Ruby on Windows), do I still need to make the master aware of that language in order to define the job, say by installing the relevant plugin?
The Master doesn't need to know about the build environment. If you set up the Slave with the proper build environment, that should be fine. The master just delegates the jobs and such.
If I define a slave that's running inside a VM and I stop the VM, when the VM comes back up, am I going to have to run some sort of startup task on the slave, or pre-execute task on the master, to re-register the slave before I can start a project running on the slave?
It depends on how you connect the Slave to the Master. For example, if you connect a Windows machine with the launch method "Let Jenkins control this Windows slave as a Windows service", it should reconnect automatically when the slave is back online. There is some setup involved in getting this to work, however.
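If a slave doesn't come back on its own, you can also re-launch it from the Master's script console instead of clicking through the UI (a sketch; the node name is a placeholder):
import jenkins.model.Jenkins

def computer = Jenkins.instance.getNode('Node-Name').toComputer()
if (computer.isOffline()) {
    computer.connect(false)    // false = don't force a reconnect if one is already in progress
}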
When the slave task completes and the results are reported back, are those results stored on the master such that I can shut down the slave and still have access to previous test run results and trending information?
Console logs are kept on the Master. That's probably what you want.
Hope that helps :)

One execution per Windows VMware VM as Jenkins slaves?

I am trying to run some automated acceptance tests on a windows VM but am running into some problems.
Here is what I want: a job which always runs on a freshly reverted VM. This job will get an MSI installer from an upstream job, install it, and then run some automated tests on it, using robotframework (but that doesn't really matter here).
I have set up the slave in the vSphere plugin to have only one executor and to disconnect after one execution. On disconnect it shuts down and reverts. My hope was this meant it would run one Jenkins job and then revert, the next job would get a fresh snapshot, and so on.
The problem is that if a job is queued waiting for the VM slave, it starts as soon as the first job finishes, before the VM has shut down and reverted. The signal to shut down and revert has already been sent, however, so the next job fails almost immediately as the VM shuts down.
Everything works fine as long as jobs needing the VM aren't queued while another is running, but if they are I run into this problem.
Can anyone suggest a way to fix this?
Am I better off using vSphere build steps rather than setting up a build slave in this fashion? If so, how exactly do I go about getting the same workflow using build steps and (I assume) pipelined builds?
Thanks
You can set a 'Quiet period' - it's in Advanced Project Options when you create a build. You should set it on the parent job; it is the time to wait before executing the dependent job.
If you increase the wait time, the server will have gone down before the second job starts...
It turns out the version of the vSphere plugin I was using was outdated; this bug is fixed in the newer version.

Jenkins kill all child processes

I have a jenkins job that runs a bash script.
In the bash script I perform effectively two actions, something like
java ApplicationA &
PID_A=$!
java ApplicationB
kill $PID_A
but if the job is manually aborted, ApplicationA remains alive (as can be seen with ps -ef on the node machine). I cannot use trapping and so on, because that won't work if Jenkins sends a signal 9 (traps don't fire on SIGKILL).
It would be ideal if this job could be configured to simply kill all processes that it spawns, how can I do that?
Actually, by default, Jenkins has a feature called ProcessTreeKiller which is responsible for making sure no processes are left running after the job execution.
The link above explains how to disable that feature. Are you sure you haven't disabled it by mistake somehow?
Edit:
Following the comments by the author: based on the information about disabling ProcessTreeKiller, to achieve the inverse you must make sure the environment variable BUILD_ID is set to the build ID of the Jenkins job. This way, when ProcessTreeKiller looks through the running processes for ones to kill, it will find these as well:
export BUILD_ID=$BUILD_ID
You can also use the Build Result Trigger plugin, configure a second job to clean up your applications, and configure it to monitor the first job for ABORTED state as a trigger.
