Octane-Jenkins execution node - jenkins

I am trying to run UFT tests from Octane through Jenkins, and Octane generates a job that shows that execution node.
Is it possible to modify that parameter somewhere? The job just stays in the queue.

Related

How to avoid scheduling/starting multiple runs of a Jenkins job at the same time

We are moving our build system over from Hudson to Jenkins, and also to declarative pipelines in SCM. Alas, it looks like there are some hiccups. In Hudson, when a job was scheduled and waiting in the queue, no new runs were scheduled for that project, which makes all the sense in the world. In Jenkins, however, I observe that e.g. 5 instances of a job get started at the same time, triggered by various upstream or SCM change events. They have all even kind of started: one of them is actually running on the build node and the rest are waiting in "Waiting for next available executor on (build node)". When the build node becomes available, they all dutifully start running in turn and all dutifully run through, most of them to no purpose at all as there are no more changes, and this all takes a huge amount of time.
The declarative pipeline script in SCM starts with the node declaration:
pipeline {
    agent {
        label 'BuildWin6'
    }
    ...
}
I guess the actual problem is that Jenkins starts to run these jobs even though the specified build node is busy. Maybe it thinks I might have changed the Jenkinsfile in the SCM and specified another build node to run the thing on? Anyway, how to avoid this? This is probably something obvious as googling does not reveal any similar complaints.
For the record, answering myself. It looks like the best solution is to define a separate trigger job which is itself triggered by SCM changes. It should do nothing else, only check out the needed svn repos (with depthOption: 'empty' for space and speed). The job needs to be bound to run on the same agent as the main job.
The main job is triggered only by the trigger job, not by SCM changes. Now if the main job is building for an hour and there are 10 svn commits during that time, Jenkins will schedule 10 trigger job builds. They all wait in the queue while the agent is busy. When the agent becomes available, they all run quickly through and trigger the main job. The main job is triggered only once; for that, one must ensure its grace/quiet period is longer than the trigger job's run time.
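A minimal sketch of that trigger job in declarative pipeline syntax (the polling schedule, the name 'main-job', and the 300-second quiet period are assumptions for illustration, not values from the question):

```groovy
// Trigger job: cheap, bound to the same agent as the main job.
pipeline {
    agent { label 'BuildWin6' }
    triggers { pollSCM('H/5 * * * *') }   // the SCM trigger lives here, not on the main job
    stages {
        stage('Fire main job') {
            steps {
                // A real trigger job would also do the depthOption: 'empty'
                // svn checkout here. wait: false lets queued trigger builds
                // drain quickly instead of blocking on the main build.
                build job: 'main-job', wait: false
            }
        }
    }
}
```

The main job would then carry `options { quietPeriod(300) }` and no SCM trigger of its own, so the ten quick trigger builds collapse into a single main build once the quiet period elapses.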

Stress testing jenkins master using jmeter

I am trying to stress test my Jenkins infrastructure using JMeter. I have created a JMeter test plan which uses the HTTP Request sampler to trigger Jenkins builds through the Jenkins REST API. The idea is to trigger a large number of builds and monitor system health. When I run the test plan with a single thread it works fine, but when I run it with multiple threads, each thread's HTTP request should trigger its own build... yet each build is triggered only once on Jenkins, no matter what the thread count is. In the JMeter results, the HTTP request is reported successful for all threads, but on Jenkins the build seems to be triggered only once.
A well-behaved JMeter test must represent real system usage. If you want to simulate a user clicking Jenkins' "Build Now" button, you need to send a request like:
http://jenkins_host:port/job/jobname/build?delay=0sec
The delay=0sec parameter is uber important: without it, only the first request will trigger the job. With it, you will have as many concurrent builds as there are available executors; if there are not enough executors to serve all the builds, the rest are put into the queue.
You can use JMeter PerfMon Plugin for monitoring Jenkins node health (CPU, RAM, JVM metrics, etc.)
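The same behaviour can be reproduced outside JMeter with a few concurrent HTTP requests. A sketch (the host, port, and job name are placeholders, and authentication is omitted for brevity):

```python
import threading
import urllib.request

def build_url(base_url, job_name):
    # delay=0sec makes Jenkins schedule a fresh build for every request
    # instead of coalescing all of them into the first queued one
    return f"{base_url}/job/{job_name}/build?delay=0sec"

def trigger(base_url, job_name):
    # Jenkins answers "201 Created" when the build has been queued
    req = urllib.request.Request(build_url(base_url, job_name), method="POST")
    with urllib.request.urlopen(req) as resp:
        return resp.status

def stress(base_url, job_name, users=10):
    # each thread plays the role of one JMeter virtual user
    threads = [threading.Thread(target=trigger, args=(base_url, job_name))
               for _ in range(users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
```

Calling `stress("http://jenkins_host:8080", "jobname")` would then queue one build per thread, running as many concurrently as there are executors.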

how to kill jenkins job based on build condition and then again put in the build queue?

There is a requirement in our Jenkins CI setup where we need to kill a particular child job initiated by a master job if a certain condition fails, or if a particular file is present/absent at a path checked as part of the build step. Afterwards, the same job needs to be put back into the build queue so that the condition can be checked and executed again on the next run.
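One way this could be sketched in a scripted pipeline step, assuming the child job is itself a pipeline (the marker file path and the quiet period are hypothetical; the thread does not confirm a solution):

```groovy
// In the child job: abort when the precondition fails, then re-queue itself.
node {
    // '/data/input.ready' is a hypothetical marker file for the condition
    if (!fileExists('/data/input.ready')) {
        // put this same job back into the queue for a later attempt
        build job: env.JOB_NAME, wait: false, quietPeriod: 300
        error 'Precondition not met - aborting and re-queueing'
    }
    // ...actual build steps run only when the condition holds...
}
```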

Jenkins - cleanup after job

I have a couple of unit testing / BDD jobs on our Jenkins instance that trigger a bunch of processes as they run. I have multiple Windows slaves, any one of which can run my tests.
After the test execution is complete, irrespective of whether the build status is passed/failed/unstable, I want to run "taskkill" and kill a couple of processes.
I had been doing that earlier by triggering a "Test_Janitor" downstream job - but this approach doesn't work anymore since I added more than one slave.
How can I either run the downstream job on the same slave as the upstream job, or have some sort of post-build step to run "taskkill"?
You can install the Post Build Task plugin to call a batch script on the slave (when your UT/BDD are completed).
The other solution is to call a downstream job and to pass the %NODE_NAME% variable to this job with the Parameterized Trigger plugin.
Next, you can use psexec to kill the processes on the relevant node.
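If the jobs are (or can become) pipelines, a declarative pipeline's post section gives the same effect without any plugin, since it runs on the node that executed the build. A sketch, where the test runner and the process name are placeholders:

```groovy
pipeline {
    agent { label 'windows' }
    stages {
        stage('Test') {
            steps {
                bat 'run_tests.bat'   // hypothetical UT/BDD test runner
            }
        }
    }
    post {
        always {
            // runs on the same slave whether the build passed, failed, or is
            // unstable; '|| exit /b 0' keeps an already-dead process from
            // failing the cleanup step
            bat 'taskkill /F /IM my_process.exe /T || exit /b 0'
        }
    }
}
```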

Jenkins alerts when a remote cron job fails

I'm trying to set up a Jenkins server to monitor a bunch of cron jobs. I will launch most of them as Jenkins freestyle projects; however, some of the cron jobs will be remote, so they will be communicating back as external jobs. How can I get warnings when those external jobs fail? And can I set the schedule they should be on, so I get warnings even when they don't run?
Thanks.
To my understanding, and to answer your latter question: you can set up a cron schedule in your Jenkins monitoring job so that it runs exactly when you expect the remote cron job to execute. Say your remote job runs at 1 pm every day; you can configure your Jenkins monitor to kick off a build at 12:45 pm or 1 pm, based on your needs.
A solution to your first question could be to have a wrapper script execute the remote invocation, so that you can monitor the return value of that remote invocation.
For example, if you have an Ant script invoke a Java command remotely, the Java command will return non-zero on failed execution, the Ant script will then fail, and so the Jenkins job executing the Ant script will fail and you will be notified about it.
I can help you with further specific steps if you did not follow me.
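A minimal shell wrapper along those lines might look like this (the user, host, and remote script path in the comment are placeholders):

```shell
#!/bin/sh
# Run the remote job and make its exit code the exit code of this Jenkins
# build step, so a remote failure fails (and therefore alerts on) the build.
run_remote() {
    # "$@" is the command that reaches the remote machine,
    # e.g.: ssh user@remote-host /opt/cron/nightly.sh
    "$@"
    status=$?
    if [ "$status" -ne 0 ]; then
        echo "Remote job failed with exit code $status" >&2
    fi
    return "$status"
}

if [ $# -gt 0 ]; then
    run_remote "$@"
fi
```

Jenkins marks a freestyle build failed whenever its shell step exits non-zero, so propagating the remote status is all the "monitoring" the build needs.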
