Our Jenkins runs massive integration tests. The longer Jenkins has been running, the longer the tests take, so we restart the Jenkins server every night via a cronjob. By that time the build queue is usually too long to have finished, and the currently running job is cancelled and marked as failed. That's ugly. I found the Safe Restart Plugin, but it waits for the build queue to be empty. Ideally I would have a job that could be prioritized, so I could also trigger a reboot at any desired time during the day. This job needs to perform the reboot the same way the Safe Restart Plugin would once no jobs were left.
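For what it's worth, a minimal sketch of what such a job could run, assuming you can execute a system Groovy script with administrator rights (e.g. via the Groovy plugin's "Execute system Groovy script" build step in a cron-scheduled job). Jenkins core exposes safeRestart(), which is what the /safeRestart URL uses: running builds are allowed to finish, new builds are held in the queue, and then Jenkins restarts. The job setup itself is an assumption, not something from this thread.

// System Groovy script, run with admin permissions from a scheduled job.
// Assumption: the goal is the same behaviour as a manual safe restart.
import jenkins.model.Jenkins

// Let currently running builds finish, hold new builds in the queue
// (queued items are persisted across the restart), then restart Jenkins.
Jenkins.instance.safeRestart()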
We are moving our build system from Hudson to Jenkins, and also to declarative pipelines in SCM. Alas, it looks like there are some hiccups. In Hudson, when a job was scheduled and waiting in the queue, no new builds of that project were scheduled, which makes perfect sense. In Jenkins, however, I observe that e.g. 5 instances of a job get started at the same time, triggered by various upstream or SCM change events. They have all even sort of started: one of them is actually running on the build node and the rest are waiting in "Waiting for next available executor on (build node)". When the build node becomes available, they all dutifully start running in turn and all dutifully run through, most of them to no purpose at all since there are no further changes, and all of this takes a huge amount of time.
The declarative pipeline script in SCM starts with the agent declaration:
pipeline {
    agent {
        label 'BuildWin6'
    }
    ...
I guess the actual problem is that Jenkins starts these jobs even though the specified build node is busy. Maybe it thinks I might have changed the Jenkinsfile in SCM and specified another build node to run on? Anyway, how do I avoid this? It is probably something obvious, as googling does not reveal any similar complaints.
For the record, answering myself. It looks like the best solution is to define another trigger job which is itself triggered by SCM changes. It should do nothing other than check out the needed svn repos (with depthOption: 'empty' for space and speed). The job needs to be bound to the same agent as the main job.
The main job is triggered only by the trigger job, not by SCM changes. Now if the main job is building for an hour and there are 10 svn commits during that time, Jenkins will schedule 10 builds of the trigger job. They all wait in the queue while the agent is busy. When the agent becomes available, they all run through quickly and trigger the main job. The main job is triggered only once; for that, one must ensure its grace/quiet period is longer than the trigger job's run time.
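A rough sketch of such a trigger job as a declarative pipeline, reusing the label from the snippet above and using placeholder names for the repository and the main job. The quiet period is passed on the build step here; configuring it on the main job itself, as described above, works just as well.

pipeline {
    // Same label as the main job, so this only runs when that agent is free
    agent { label 'BuildWin6' }
    triggers {
        // Poll SVN for changes (schedule is just an example)
        pollSCM('H/5 * * * *')
    }
    stages {
        stage('Trigger main job') {
            steps {
                // Sparse checkout, just enough to register the SCM change
                // (placeholder repository URL)
                checkout([$class: 'SubversionSCM',
                          locations: [[remote: 'https://svn.example.com/repo/trunk',
                                       local: '.',
                                       depthOption: 'empty']]])
                // Fire the main job without waiting; the quiet period must be
                // longer than this job's run time so a burst of triggers
                // collapses into a single build
                build job: 'main-job', wait: false, quietPeriod: 600
            }
        }
    }
}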
I want to upgrade my Jenkins master without aborting long-running jobs on the slaves or waiting for them to finish. Is there a plugin available that provides this feature?
We have several build jobs running regression and integration tests which take hours to run. Often at least one of those jobs is running, making it hard to restart Jenkins after updates. I know that it is possible to block the queue. We tried this, but it hinders more than it helps.
What we are looking for is a plugin that runs jobs on slaves, caches the output as soon as the connection to the master is interrupted, and sends the remaining output to the master when the master is up again. Does anybody know a plugin providing this feature?
I am trying to run some automated acceptance tests on a Windows VM but am running into some problems.
Here is what I want: a job which always runs on a freshly reverted VM. This job will get an MSI installer from an upstream job, install it, and then run some automated tests against it, in this case using robotframework (but that doesn't really matter here).
I have set up the slave in the vSphere plugin to have only one executor and to disconnect after one execution. On disconnect it shuts down and reverts. My hope was that this meant it would run one Jenkins job and then revert, the next job would get a fresh snapshot, and so on.
The problem is that if a job is queued waiting for the VM slave, it starts as soon as the first job finishes, before the VM has shut down and reverted. The signal to shut down and revert has already been sent, however, so the next job fails almost immediately as the VM shuts down.
Everything works fine as long as jobs needing the VM aren't queued while another is running, but if they are I run into this problem.
Can anyone suggest a way to fix this?
Am I better off using vSphere build steps rather than setting up a build slave in this fashion? If so, how exactly do I go about getting the same workflow to work using build steps and (I assume) pipelined builds?
Thanks
You can set a 'Quiet period' - it's in the Advanced Project Options of the job configuration. You should set it on the parent job; it is the time to wait before executing the dependent job.
If you increase the wait time, the server will have gone down before the second job starts...
Turns out the version of the vSphere plugin I was using was outdated; this problem is fixed in the newer version.
I have a Jenkins job which compiles and publishes our Java project to a JBoss server. Obviously, the server takes time to start and deploy the new code. I have a second Jenkins job that runs Selenium tests against the running JBoss instance.
I would like to make the second (Selenium) job be performed automatically as a post-build action from the first job (I have already done this), but I want it to be delayed by, say, 2 minutes. The amount of delay time isn't important, but I can't find anywhere that describes how to delay the start of a post-build job. How would I accomplish this?
In the advanced project options of a project configuration, you can set a "quiet period" that does exactly that. Jenkins will wait the specified amount of time after a build has been triggered before actually starting the build.
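For reference, the same idea can be expressed in a declarative pipeline with the quietPeriod option; a minimal sketch for the Selenium job, where the stage name and the two-minute delay are just examples:

pipeline {
    agent any
    options {
        // Wait 120 seconds after being triggered before actually starting
        quietPeriod(120)
    }
    stages {
        stage('Selenium tests') {
            steps {
                // Placeholder for the actual test run against the JBoss instance
                echo 'Running Selenium tests'
            }
        }
    }
}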
Alternatively, you could have the JBoss server trigger the build (e.g. by calling a URL) once it's up and running. The advantage of that is that it would take care of cases where the JBoss server doesn't start for some reason.
You might also want to have a look at the Parameterized Trigger Plugin, which allows you to run builds of other projects as build steps. This way you could run the Selenium tests as part of the original job and fail the build if those tests fail.
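If you go the build-step route in a pipeline instead, the build step accepts a quiet period and can propagate the test result back to the deploy job; a sketch, with a placeholder downstream job name:

// Inside the deploy job's pipeline, after publishing to JBoss
// ('selenium-tests' is a placeholder job name):
build job: 'selenium-tests',
      quietPeriod: 120,   // give JBoss time to start up and deploy
      wait: true,         // block until the tests have finished
      propagate: true     // fail this build if the tests fail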
Is there an elegant way to temporarily prevent Jenkins from executing any further builds in a defined time frame (say e.g. daily between 6am and 7am)?
Rather than stopping Jenkins, you can put it into "Quiet Down" mode, which prevents any new builds from taking place.
You can enable this via the URLs /quietDown and /cancelQuietDown, or via the CLI commands quiet-down and cancel-quiet-down.
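A sketch of how that could be automated for a fixed daily window, assuming two cron-scheduled jobs that run system Groovy scripts with admin rights; doQuietDown() and doCancelQuietDown() are the core methods behind those URLs.

// Script for a job scheduled at 6am: stop starting new builds
// (same effect as POSTing to /quietDown)
import jenkins.model.Jenkins
Jenkins.instance.doQuietDown()

// Script for a second job scheduled at 7am: accept builds again
// (same effect as POSTing to /cancelQuietDown)
import jenkins.model.Jenkins
Jenkins.instance.doCancelQuietDown()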
Depending on what exactly you want to achieve, you can probably use the Exclusive Execution Plugin. This plugin allows you to schedule a job which blocks execution of all other jobs by putting Jenkins into shutdown mode (which is cancelled when the job is done). You can make this job start at 6am and have it run a simple ant script which sleeps for an hour.
However, if you are trying to use that window to e.g. run a backup, you could actually run the backup from within that job. That makes 100% sure the backup won't start until all running jobs are completed, and it makes Jenkins available again as soon as the backup is done.
Alternatively, you could consider using cron or the Windows scheduler (depending on your OS) to stop Jenkins completely at 6am and restart it at 7am.