How to discard old builds in Jenkins, based on their completion status?

I have a Jenkins job set up to poll a git repository for changes every 10 minutes. If it doesn't find one (99 times out of 100, this is what happens) it aborts the build early and marks it as UNSTABLE. If it finds a change, it goes through with the build, marks it as SUCCESS, and stores its artifacts. I know I can use a plugin to discard old builds, but it only allows for these options:
As you can see, there is no option to filter by completion status.
Ideally, I want to discard all but the latest UNSTABLE build and keep all SUCCESS or FAILED builds and their artifacts. If that is not possible, simply discarding all UNSTABLE builds would also work.
Note: I am using a declarative Pipeline

One possibility would be to discard builds programmatically. Get your job object with def job = Jenkins.instance.getItem("JobName"). Since you are using a declarative Pipeline, the job is of type WorkflowJob [1] and you can get all of its builds with job.getBuilds(). Now you can check the result of each build (WorkflowRun objects [2]) and decide whether to delete it. Something like the following should work:
def job = Jenkins.instance.getItem("JobName")
// iterate over all runs of the job and delete those that ended UNSTABLE
job.getBuilds().each {
    if (it.result.toString() == "UNSTABLE") {
        it.delete()
        job.save()
    }
}
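The snippet above removes every UNSTABLE run. If you would rather keep the most recent UNSTABLE build, as the question prefers, a variation along these lines might work. This is only a sketch for the script console (or an approved system Groovy step), and it assumes getBuilds() lists the newest run first, which is the usual ordering:
import jenkins.model.Jenkins

def job = Jenkins.instance.getItemByFullName("JobName")
// collect all UNSTABLE runs; the list is assumed to be ordered newest-first
def unstableRuns = job.getBuilds().findAll { it.result?.toString() == "UNSTABLE" }
// keep the first (latest) one and delete the rest
unstableRuns.drop(1).each { it.delete() }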
You could create a new job that executes the code above and is triggered after YourJob has been built.
[1] https://javadoc.jenkins.io/plugin/workflow-job/org/jenkinsci/plugins/workflow/job/WorkflowJob.html
[2] https://javadoc.jenkins.io/plugin/workflow-job/org/jenkinsci/plugins/workflow/job/WorkflowRun.html

Related

PullRequest Build Validation with Jenkins and OnPrem Az-Devops

First off, the setup in question:
A Jenkins instance with several build nodes and an on-prem Azure DevOps server containing the Git repositories.
The repo in question is too large to build on every push for all branches and all devs, so a small workaround was put in place:
The production branches have polling enabled twice a day (because of the testing duration, which is handled downstream, more builds would not improve quality).
All other branches have automated building suppressed. Devs can still start builds/deployments/unit tests manually if they choose.
The Jenkinsfile is parameterized for which platforms to build; on prod* branches all platforms are true, on all other branches false.
This helps because otherwise the initial build of a feature branch would always build/deploy all platforms, which would put too much load on the server infrastructure.
I added a service endpoint for Jenkins in Azure DevOps and added a build validation .yml. This basically works: when I call the source branch of the pull request with the merge commit ID, I pass a parameter,
isPullRequestBuild, which contains the ID of the PR.
snippet of the yml:
- task: JenkinsQueueJob@2
  inputs:
    serverEndpoint: 'MyServerEndpoint'
    jobName: 'MyJob'
    isMultibranchJob: true
    captureConsole: true
    capturePipeline: true
    isParameterizedJob: true
    multibranchPipelineBranch: $(System.PullRequest.SourceBranch)
    jobParameters: |
      stepsToPerform=Build
      runUnittest=true
      pullRequestID=$(System.PullRequest.PullRequestId)
Snippet of the Jenkinsfile:
def isPullRequest = false
if (params.pullRequestID?.trim()) {
    isPullRequest = true
    // do stuff to change how the pipeline should react
}
In the Jenkinsfile I check whether the parameter is non-empty; if so, I reset the platforms to build to basically all of them and run the unit tests.
The problem is: if the branch has never been built, Jenkins does not yet know the parameter on the first run, so it is ignored, nothing is built, and the job returns 0 because "nothing had to be done".
Is there any way to only run the Jenkins build if it hasn't run already?
Or is it possible to get information from the remote call about whether this was the build with ID 1?
The only other option would be to call the Jenkins web API and check for the last successful build, but in that case I would have to store the token somewhere in source control.
Am I missing something obvious here? I don't want the feature branch builds to do nothing more than once, because devs could lose useful information about their started builds/deployments.
Any ideas appreciated.
To whom it may concern with similar problems:
In the end I used the following workaround:
The Jenkins endpoint is called via a user that is only used for automated builds. So, in case this user triggered the build, I set everything up to run a pull request validation, even if it is the first build. Along the lines of:
def causes = currentBuild.getBuildCauses('hudson.model.Cause$UserIdCause')
if (causes != null) {
    def buildCauses = readJSON text: currentBuild.getBuildCauses('hudson.model.Cause$UserIdCause').toString()
    buildCauses.each { buildCause ->
        if (buildCause['userId'] == "theNameOfMyBuildUser") {
            triggeredByAzureDevops = true
        }
    }
}
getBuildCauses must be allowed to run by a Jenkins admin for this to work.

Jenkins - run long job nightly if new work done?

Right now, I have two sets of benchmarks, a short one and a long one. The short one runs on checkin for every branch. Which set to run is a parameter - SHORT or LONG. The long one always runs nightly on the dev branch. How can I trigger other branches to build and run the long benchmark if the branch was built successfully today?
If you want to run those long tests only at night, I find it easiest to just duplicate the job and modify it so it is triggered at night and has the additional checks added after the normal ones, i.e. your post-commit job does just the short test, while the nightly-triggered one does the short test first and then (if there are no errors) the long one.
I find that much easier to handle than the added complexity of chaining jobs on some condition, like evaluating the time of day to skip some tests.
Example 1st job that runs after every commit
node() {
    stage('Build') {
        // Build
    }
    stage('Short Test') {
        // Short Test
    }
}
2nd job that triggers nightly
node() {
    stage('Build') {
        // Build
    }
    stage('Short Test') {
        // Short Test; fail the build here when not successful
    }
    stage('Long Tests') {
        // Long Test, runs only when the short test was successful
    }
}
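As a side note, the nightly timer for the second job can also be declared from the script itself instead of in the job configuration. A minimal sketch, where the cron spec is just an example value:
// Registers a nightly timer on this scripted job; 'H 2 * * *' means
// "some minute between 02:00 and 02:59", letting Jenkins spread the load.
properties([
    pipelineTriggers([cron('H 2 * * *')])
])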
Edit
A solution that gets it all into a single job; however, it adds a lot of complexity and makes some follow-up use cases harder to integrate, e.g. different notifications for the integration test branch, tracking of build durations, etc. I still find it more manageable to have it split into 2 jobs.
The following job must be configured to be triggered by a post-commit hook and a nightly timer. It runs the long test when
the last build is younger than the configured threshold (you don't want it to trigger from the last nightly),
the last run was successful (you don't want to run the long test for a broken build), and
it was triggered by said timer (you don't want to trigger on a check-in).
def runLongTestMaxDiffMillis = 20000

// start times are epoch milliseconds (long values), so don't truncate them to int;
// this also assumes a previous build exists
def lastRunDiff = currentBuild.getStartTimeInMillis() - currentBuild.getPreviousBuild().getStartTimeInMillis()
def lastBuildTooOld = (lastRunDiff > runLongTestMaxDiffMillis)
// a non-empty list of timer causes is truthy
def isTriggeredByTimer = currentBuild.getBuildCauses('hudson.triggers.TimerTrigger$TimerTriggerCause')
def lastBuildSuccessful = (currentBuild.getPreviousBuild().getResult() == 'SUCCESS')
def runLongTest = (!lastBuildTooOld && isTriggeredByTimer && lastBuildSuccessful)

node() {
    if (runLongTest) {
        println 'Running long test'
    } else {
        println 'Skipping long test'
    }
}
You can create another pipeline that calls the parameterized pipeline with the LONG parameter, for example:
stage('long benchmark') {
    build job: 'your-benchmark-pipeline', parameters: [string(name: 'type', value: 'LONG')]
}
When you configure this new pipeline you can tick the Build after other projects are built checkbox in the Build Triggers section and choose which short benchmarks should trigger it once they complete successfully (the default behavior).
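If the new pipeline is declarative, roughly the same trigger can also be expressed in the Jenkinsfile instead of the UI. A sketch, with placeholder job names:
pipeline {
    agent any
    triggers {
        // fires after the named upstream job finishes with SUCCESS or better
        upstream(upstreamProjects: 'your-short-benchmark-pipeline', threshold: hudson.model.Result.SUCCESS)
    }
    stages {
        stage('long benchmark') {
            steps {
                build job: 'your-benchmark-pipeline', parameters: [string(name: 'type', value: 'LONG')]
            }
        }
    }
}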
You can use the Schedule Build Plugin to schedule a build of the long job when the short job succeeds.
The short job runs on every branch; when a build succeeds for a certain branch, it schedules a build of the long job (during the night) with the branch as a parameter, so the long job will run on that particular branch.

jenkins, how to run multiple remote jobs without stopping on failure

I have a Jenkins job that I'm using to aggregate the execution of multiple other jobs that only perform testing. Because they are testing jobs, I want all of them to run regardless of any failures. I do want to keep track of whether or not there has been a failure so that I can set the end result to FAILURE rather than SUCCESS if need be.
At the moment I am calling one remote job via a bash script and jenkins-cli. I have a second child job that is local, so I'm using the "Trigger/call builds on other projects" build step to run that one.
Any ideas on how to accomplish this?
If you can use the Build Flow plugin, it is easy. If you use Pipeline it is possible too, but I can't give you an example off-hand; I'd have to look it up if that is the case.
https://wiki.jenkins-ci.org/display/JENKINS/Build+Flow+Plugin:
def result = SUCCESS
ignore(FAILURE) {
    def job1 = build('job1')
    result = job1.result.combine(result)
}
ignore(FAILURE) {
    def job2 = build('job2')
    result = job2.result.combine(result)
}
build.result = result.combine(build.result)
http://javadoc.jenkins.io/hudson/model/Result.html
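Since the answer mentions Pipeline can do this too, here is a rough scripted-pipeline sketch of the same idea (job names are placeholders): calling build with propagate: false keeps the aggregating job running even when a child fails, and the worst outcome is applied at the end.
def overall = 'SUCCESS'
node {
    for (jobName in ['job1', 'job2']) {
        // propagate: false -> a failing child job does not abort this aggregating job
        def run = build(job: jobName, propagate: false, wait: true)
        if (run.result != 'SUCCESS') {
            overall = 'FAILURE'
        }
    }
    // reflect the worst child outcome on this build
    currentBuild.result = overall
}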

Triggering build information available in child job

I have a job in Jenkins called notification_job which uses the "Build after other projects are built" trigger. The list of jobs will be around 25 and continue to grow. The notification_job needs to know the triggering build's name and build number.
I would like all of the configuration to be done through the notification_job, not through the triggering jobs as that list will grow and become a pain to manage. So how can I retrieve the build name and number in the child job?
Jenkins version is 2.19.3
Thank you,
Duke
I was able to pull the data with a Groovy script:
import hudson.model.Cause

for (cause in build.getCauses()) {
    if (cause instanceof Cause.UpstreamCause) {
        println cause.getUpstreamProject()
        println cause.getUpstreamBuild()
    }
}
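If the notification_job is itself a Pipeline job, a hedged equivalent would be to read the upstream cause through the currentBuild wrapper, which exposes the same information as maps:
// each returned cause is a map; upstreamProject / upstreamBuild mirror the values above
def upstreamCauses = currentBuild.getBuildCauses('hudson.model.Cause$UpstreamCause')
for (cause in upstreamCauses) {
    echo "Triggered by ${cause.upstreamProject} #${cause.upstreamBuild}"
}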

Multiple concurrent builds of the same project in Jenkins

On my team, we have a project that we want to do continuous-integration-style testing on. Our build takes around 2 hours and is triggered by the "Poll SCM" trigger (using Perforce as the server), and we have two build nodes.
Currently, if someone checks in a change, one build node will start up pretty much right away, but if another change gets checked in, the other node will not kick in, as it's waiting for the previous job to finish. However, I would like the other build node to start a build with the newer checkin as soon as possible, so that we can maximize the amount of continuous testing that's occurring (so that if e.g. one build fails we know sooner rather than later).
Is there any simple way to configure a Jenkins job (using Poll SCM against a Perforce server) to not block while another instance of the job is already running?
Unfortunately, due to the nature of the project it's not possible to simply break the project up into multiple build jobs that get pipelined across multiple slaves (as much as I'd like to change it to work in this way).
Use the "Execute concurrent builds if necessary" option in Jenkins configuration.
Just to register here in case someone needs it, in the version I'm using (Jenkins 2.249.3) I had to uncheck the option Do not allow concurrent builds in the child job that is called multiple times from the parent job.
The code is more or less like this:
stage('STAGE IN THE PARENT JOB') {
    def subParallelJobs = [:]
    LIST_OF_PARAMETERS = LIST_OF_PARAMETERS.split(",")
    for (int i = 0; i < LIST_OF_PARAMETERS.size(); i++) {
        MY_PARAMETER_VALUE = LIST_OF_PARAMETERS[i].trim()
        MY_KEY_USING_THE_PARAMETER_TO_MAKE_IT_UNIQUE = "JOB_KEY_${MY_PARAMETER_VALUE}"
        def jobParams = [ string(name: 'MY_JOB_PARAMETER', value: MY_PARAMETER_VALUE) ]
        // use the key variable (not a string literal) so map entries are not overwritten
        subParallelJobs.put(MY_KEY_USING_THE_PARAMETER_TO_MAKE_IT_UNIQUE, { build(job: "MY_CHILD_JOB", parameters: jobParams) })
    }
    parallel(subParallelJobs)
}
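For what it's worth, if the child job is a declarative pipeline, the "Do not allow concurrent builds" checkbox corresponds to the disableConcurrentBuilds() option, so allowing concurrency simply means leaving that option out. A minimal illustration:
pipeline {
    agent any
    // Concurrent builds are allowed by default; the commented-out option below is
    // what serializes runs, so omit it when the job is called in parallel.
    // options { disableConcurrentBuilds() }
    stages {
        stage('Build') {
            steps {
                echo 'building'
            }
        }
    }
}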
