I use the Job DSL plugin. I created a seed job that runs my myJobs.jenkins_jobs file, inside which I have defined a job called 'job_template' and another job that uses 'job_template'. However, after building the seed job, it continues to build again and again until I disable it.
In https://jenkinsci.github.io/job-dsl-plugin/#path/job-using I see:
Creates a new job configuration, based on the job template referenced by the parameter and stores this. When the template is changed, the seed job will attempt to re-run, which has the side-effect of cascading changes of the template to the jobs generated from it.
However, I'm not sure what I can do to get rid of this constant rebuilding.
My myJobs.jenkins_jobs file looks like this:
job('job_template'){
}
job('railgun-db-importer-DSL') {
using 'job_template'
}
SOLUTION
The error was that the template job had its 'description' field updated with a date after every run; this caused it to change on every run, which triggered another run every time. After giving each job its own 'description' and hardcoding the template job's description so it no longer changes itself on each run, I got rid of the perpetual runs.
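For reference, a minimal sketch of the corrected DSL (the description strings are placeholders):

job('job_template') {
    // Hardcoded description: the template must stay identical across seed
    // runs, otherwise the seed job re-triggers itself.
    description('Template job - maintained by the seed job, do not edit by hand')
}

job('railgun-db-importer-DSL') {
    using('job_template')
    // Each generated job carries its own static description.
    description('Imports the railgun DB')
}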
You must not maintain template jobs with job-dsl.
The idea behind a template job is that you can create new jobs via job-dsl, based on an existing job that's not maintained by job-dsl (this is the template job).
Typically, you want to do that if there's some complex plugin configuration which is hard to implement in job-dsl directly -- in those cases, it can be simpler to create a template job manually and use it as a basis for further configuration via job-dsl.
In your example, every DSL run will touch the template job; since modifications of the template job will trigger DSL again, this can lead to the infinite loop that you observe.
Related
I know about the rebuild and replay functionality, but both of them are manual triggers. So here is our problem:
We have multiple servers that can be deployed with any existing branch, but the deployment is manual. We want to ensure that, at least once a day, the latest version of that branch is deployed, to avoid servers becoming outdated.
So what I want to do is create a scheduler job that runs once a day and triggers a Jenkins job to rebuild the last build using the exact same parameters.
Would be great if someone has some input here :-)
You can try out the Persistent Parameter plugin and use it to define the relevant parameters inside the deploy job that you want to reuse.
This plugin enables you to set your input parameters (string, text, boolean and choice) with default values taken from the previous build - so every time you run the build, whether manually or triggered from another job, the values used are the values from the last execution.
Your caller job can still pass parameters to the deploy job during the daily execution - but for any parameter that is not passed, its latest value will be used.
You can also override parameters defined as persistent, since persistence only affects the default value.
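For illustration, a minimal declarative-pipeline sketch of such a deploy job, assuming a plugin version that exposes the persistent* parameter types to declarative pipelines (the job content and parameter names here are hypothetical):

pipeline {
    agent any
    parameters {
        // Defaults are remembered from the previous build, so a manual
        // deploy of a branch 'sticks' until someone picks a different one.
        persistentString(name: 'BRANCH', defaultValue: 'master',
                         description: 'Branch to deploy')
        persistentBoolean(name: 'RESTART_SERVICES', defaultValue: false,
                          description: 'Restart services after deploy')
    }
    stages {
        stage('Deploy') {
            steps {
                echo "Deploying ${params.BRANCH} (restart: ${params.RESTART_SERVICES})"
            }
        }
    }
}

The daily caller job can then trigger this job without passing any parameters, and the values from the last execution are reused.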
I configured a job on Jenkins which triggers a Python script and runs some tests. In the end, it puts the results inside an artifact folder (so we can keep X amount of results at any time), which I am presenting with a Publish rich text message post-build action. Here is the pretty straightforward code:
<br/>
<h2>Results:</h2>
<iframe src="./artifact/artifacts/results.html" width="1100" height="1000"></iframe>
This works correctly inside a job build, but if you go to the main page of the job, I get a 404 not found since it cannot find the file. I understand that the error is correct, since there is no artifact folder when we are not within a build of the said job. My question is: how can I have different settings depending on whether we are on the main page of the job vs. a build of the job?
I checked online and found that there is a lastSuccessfulBuild variable I can use, but then I think it will show the same results no matter which build we are on, which is something I do not want.
I have created a job template in Jenkins and created jobs based on the template. But when I added the properties: Set Job properties -> Build Triggers -> Build periodically code to schedule the Jenkins job execution, the job's relationship with the template was removed after the first execution and the job became a standalone job. Whatever change I make to the template is not picked up by the job after that. Is there an option to schedule jobs while using a job template?
If I understand correctly: if you create a new job after modifying the template, the changes are reflected, but they are not reflected in the existing job?
It could be a bug in an older version of Jenkins. Try reconfiguring the existing jobs and saving them again to make the changes take effect.
I am new to Jenkins plugin development. I'm trying to write a plugin that should be executed before any multi-configuration type job runs in Jenkins.
In this plugin I want to write rules that check which configuration parameters the user selected while submitting the job; based on the selected parameters, I want to decide whether to allow the job to run or to restrict it.
The user should be shown the reason why the job cannot be run in the Console Output.
Does anyone have any ideas about which class I need to extend or which interface I need to implement in order to hook into a Jenkins job run?
You could look at the Matrix Execution Strategy plugin, which allows a Groovy script to select which matrix combinations to run (a sketch follows after this answer). I would think that if your script threw an exception, it would stop the build.
For background, multi-configuration projects run a control job (or flyweight) which runs the SCM phase and then starts all the actual combinations. This plugin runs after the flyweight's SCM checkout.
If nothing else, this will give you a working plugin to start from.
Disclaimer: I wrote this plugin.
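For illustration, a rough sketch of such a filter script. This assumes the plugin binds the candidate combinations to a combinations variable as a list of axis-name-to-value maps and runs whatever the script returns; check the plugin's documentation for the exact contract:

// Hypothetical filter script for the Matrix Execution Strategy plugin.
// 'combinations' is assumed to look like [[OS: 'linux', DB: 'mysql'], ...].
def allowed = combinations.findAll { combo ->
    // Example rule: never run production combinations from this strategy
    combo['LIFECYCLE'] != 'Production'
}

if (allowed.isEmpty()) {
    // Throwing should abort the build and surface the reason in the console
    throw new IllegalStateException('No matrix combinations are allowed to run')
}

return allowed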
The Block Queued Job plugin was what I needed.
Out of the box, that plugin supports two ways to block jobs:
Based on the result of the last run of another project.
Based on the result of the last run of the current project.
In that plugin, BlockQueueItemTaskDispatcher.java extends Jenkins' QueueTaskDispatcher, giving us a hook into Jenkins' logic to allow or block the jobs present in the queue from running.
I used this plugin as a starting point for developing a new plugin that restricts projects based on the parameters selected and the current time. The ultimate goal is to keep production migrations from running during the day.
Overriding the isBlocked() method of QueueTaskDispatcher gave me access to a hudson.model.Queue.Item instance as an argument. I then used the Item instance's getParams method to access the build parameters selected by the user at runtime, parsed the lifecycle value from them, and checked the current time. If the lifecycle was Production and the current time was during the day, I restricted the job by returning a non-null CauseOfBlockage from isBlocked(); otherwise I returned null as the CauseOfBlockage, allowing the queued job to run.
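A condensed sketch of that dispatcher, written in Groovy for brevity (a real plugin would typically be Java). Note that the stock extension point on QueueTaskDispatcher is canRun(); the isBlocked() method mentioned above comes from the Block Queued Job plugin's own dispatcher class. The parameter name and time window are illustrative:

import hudson.Extension
import hudson.model.Queue
import hudson.model.queue.CauseOfBlockage
import hudson.model.queue.QueueTaskDispatcher

import java.time.LocalTime

@Extension
class LifecycleDispatcher extends QueueTaskDispatcher {
    @Override
    CauseOfBlockage canRun(Queue.Item item) {
        // Queue.Item.getParams() returns the build parameters as a string
        boolean isProduction = item.params.contains('LIFECYCLE=Production')
        LocalTime now = LocalTime.now()
        boolean isDaytime = now.isAfter(LocalTime.of(8, 0)) &&
                            now.isBefore(LocalTime.of(18, 0))

        if (isProduction && isDaytime) {
            // Returning non-null keeps the item waiting in the queue
            return new CauseOfBlockage() {
                String getShortDescription() {
                    'Production migrations are blocked during the day'
                }
            }
        }
        return null // null lets the queued job run
    }
}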
I need to build in Jenkins only if there has been a change in the ClearCase stream. I also want to check this in the nightly build, or when someone chooses to build manually, and to stop the build completely if there are no changes.
I tried polling the SCM, but it doesn't seem to work well...
Any suggestions?
If it is possible, you should monitor the update of a snapshot view and, if the log of said update reveals any newly loaded files, trigger the Jenkins job.
You can find a similar approach in this thread.
You don't want to do something like that in a check-in trigger. It runs on the user's client and will slow things down, not to mention that you'd somehow have to figure out how to give every client access to that snapshot view.
What can work is a cron or scheduled job that runs lshistory and does something when it finds new check-ins.
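For illustration, a rough sketch of such a polling script in Groovy, run from cron/AT. It assumes cleartool is on the PATH; the VOB path and job URL are placeholders, and authentication/token handling for the remote build trigger is omitted:

// Poll ClearCase for recent check-in events (cleartool lshistory -since)
def cmd = ['cleartool', 'lshistory', '-recurse', '-since', 'yesterday', '/vobs/myvob']
def proc = cmd.execute()
proc.waitFor()

if (proc.in.text.trim()) {
    // New events found: trigger the Jenkins job via its remote build URL
    def conn = new URL('https://jenkins.example.com/job/my-job/build').openConnection()
    conn.requestMethod = 'POST'
    println "Triggered build, HTTP ${conn.responseCode}"
}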
Yes, you could do this via a trigger, but I'd suggest a combination of a trigger and an additional script, since updating the snapshot view might be time-consuming and affect check-ins...
Create a simple trigger that fires when the files you are concerned about are changed on a stream.
The trigger script should touch/create a file in some well-known network location (or perhaps write to a pipe)...
The other script could be a cron (Unix) or AT (Windows) job that runs continually or every minute and, if the well-known file is there, performs the update of the snapshot view (a sketch of this follows below).
That script could also read the pipe written to by the trigger, if you go that route.
This is better than a cron job that has to run an lshistory each time. Martina was right to suggest not doing the whole thing in a trigger, for performance and for snapshot-view accessibility for all clients; but a trigger that writes to a pipe or creates an empty file is efficient, and the cron/AT job that actually does the update is efficient too, as it does not have to query the VOB each minute, just the file (or only after there is data on the pipe).
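A minimal sketch of that cron/AT side, again in Groovy; the marker-file path and view path are placeholders, and the trigger is assumed to simply create the marker file:

def marker = new File('//fileserver/share/cc-changed.marker')

if (marker.exists()) {
    // Consume the marker first so overlapping runs don't double-update
    marker.delete()
    // Refresh the snapshot view ('cleartool update' loads new versions)
    def proc = ['cleartool', 'update', '-force'].execute(null, new File('/views/my_snapshot_view'))
    proc.waitFor()
    println proc.in.text
}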