My Jenkins jobs use TestLink to store their test results. The TestLink plugin changes the build status to unstable if a test has failed.
However, I want the build result to be determined by the xUnit plugin in a post-build action, because xUnit lets you set a failure threshold.
The build should only be unstable if there are new errors.
I was hoping to do the following:
--test--
--testlink -> marked as unstable --
-- groovy script --> marked as success --
build.result = hudson.model.Result.SUCCESS
-- xunit, checks threshold for unstable/success --
However, it seems impossible to change the build status back to success.
So now TestLink marks the build as unstable, and xUnit mirrors that status.
Is there a way to work around this problem?
Unfortunately, I don't think Jenkins will allow you to do that without an ugly hack.
For example, the Jenkins source code contains a comment that clearly states a result can only get worse:
// result can only get worse
if (result==null || r.isWorseThan(result)) {
That being said....
Once the job is done, you can "technically" log on to the master and do whatever you want to already-completed builds by changing their build.xml files directly.
For example, you could add a post-build job that goes through the files on the Jenkins master and does a mass update, replacing "<result>UNSTABLE</result>" with "<result>SUCCESS</result>" to turn all builds to success. Once the job is done, forcefully restart the Jenkins server or reload its configuration for the changes to take effect.
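For instance, a minimal Script Console sketch of that mass update, assuming the standard jobs/<name>/builds layout on the master ('my-job' is a placeholder name):

import jenkins.model.Jenkins

// Walk all build.xml files of one job and flip UNSTABLE results to SUCCESS.
// Back up the files first; this edits Jenkins' on-disk state directly.
def buildsDir = new File(Jenkins.instance.rootDir, 'jobs/my-job/builds')
buildsDir.eachFileRecurse { f ->
    if (f.name == 'build.xml' && f.text.contains('<result>UNSTABLE</result>')) {
        f.text = f.text.replace('<result>UNSTABLE</result>', '<result>SUCCESS</result>')
    }
}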
I don't recommend this as who knows what will happen to Jenkins if you start going crazy like this. ;)
Related
I've set up a Jenkins pipeline and defined it to trigger based on a Gerrit event, which works fine.
However, every time a build is triggered, it leaves a comment in Gerrit:
Jenkins Build.svc Patch Set 1: Build Started
and based on success or failure, it leaves a +1 or -1.
How do I stop it from leaving the comment and giving a +1/-1 from this particular pipeline?
I've looked at Jenkins -> Manage Jenkins -> Manage Plugins -> Gerrit triggers, but I don't see anything I can configure specific to this pipeline; those settings look global.
In the job's Gerrit Trigger section, open the Advanced settings and enable skip voting. That should work.
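If the job is a declarative pipeline, the same settings can also be expressed in its triggers block. A sketch, assuming a recent Gerrit Trigger plugin version ('my-gerrit' and 'my-project' are placeholders, and the exact parameter names may differ between plugin versions):

triggers {
    gerrit(
        serverName: 'my-gerrit',
        gerritProjects: [[
            compareType: 'PLAIN',
            pattern: 'my-project',
            branches: [[compareType: 'PLAIN', pattern: 'master']]
        ]],
        triggerOnEvents: [patchsetCreated()],
        // suppress the +1/-1 vote for every outcome of this job
        skipVote: [onSuccessful: true, onFailed: true,
                   onUnstable: true, onNotBuilt: true]
    )
}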
I've set up an automatic "pull request check" via Jenkins/GitHub/SonarQube integration.
The workflow is as follows:
GitHub pull request created by user → GitHub webhook triggers and calls the Jenkins API to execute the SonarQube scanner → scanner reports to the SonarQube server → SonarQube server calls the GitHub API (creating commit statuses, ref https://developer.github.com/v3/repos/statuses/) and posts a comment about the PR.
The issue is that the PR check is marked as failed just because the code didn't pass its health checks. The build passed, but the code is "dirty", and that causes the PR to be marked as unacceptable. I'd like to find a way to prevent code quality checks from appearing as an actual commit status and to only allow commenting.
Some additional context: SonarQube uses a techuser account token to post its analysis summary as a comment on the PR thread.
This functionality is everything we need, nothing more.
However... the plugin does one more thing, which is marking the commit as a failure. Note that we're already using something else to check for actual build failures. Although the build didn't fail, SonarQube marking the commit as a failure because of code quality makes the whole commit display as a failure. I'd like to prevent SonarQube from setting branch check statuses while still letting it comment on the PR. I couldn't find an option for anything like that in the Jenkins plugin configuration, the SonarQube admin page, or the SonarQube scanner documentation.
Thanks in advance.
What you want to achieve is currently not possible when using the SonarQube GitHub plugin, since this behaviour is hardcoded in the plugin and there is no configuration option to customize this.
In upcoming versions of SonarQube and SonarCloud, pull requests will have built-in support, and the behaviour will be the following:
The status will be red if there is at least one open issue on the PR analyzed by SonarQube/SonarCloud
Teams will have the ability to mark those issues as "Confirmed" in SonarQube/SonarCloud (to acknowledge that they accept this technical debt), in which case the status will be automatically turned to green in GitHub
I have a pipeline running, triggered by several Gerrit review hooks.
e.g.
branch-v1.0
branch-v2.0
Normally I receive my Verified votes according to the result of the appropriate job run; e.g. if the run finished successfully with passed tests, I get the Verified +1 back in my Gerrit system.
My problem:
If a job verifying my Gerrit change is running, a newer "verify job" for another change or patch set always cancels the currently running job. It doesn't matter whether the change comes from a different branch, and it makes no difference whether the new change has anything to do with the current one. The currently running change is always superseded.
In the console:
In this case, job A canceled an older job B, and later A was canceled by a newer job C:
Canceling older #3128
Waiting for builds [3126]
Canceled since #3130 got here
So, does anybody know how to avoid the canceling of the currently running job?
I wanted to use a multibranch pipeline (but I really do not know if this would help); however, as far as I know, the Gerrit plugin is currently not supported by multibranch pipelines or the Blue Ocean project.
https://issues.jenkins-ci.org/browse/JENKINS-38046
There is a new Gerrit plugin in development, but there is no information on when it will be available (or "production ready"). See the following comment in the issue:
lucamilanesio added a comment - 2017-08-18 15:40
Thanks for your support!
I am trying to see if there is a plugin that can do what I want, or something I am missing with regard to Jenkins triggers. To give you an example of what we want to do, let me explain how things currently happen:
A merge is made
Jenkins picks up on merge, pulling changes on remote build machine
Server is stopped
Build, checks, etc are done
Server is started
So the above is all well and good and working. However, what we want to do is trigger the server stop and build after the merge is picked up by Jenkins. Here is the catch though: it is a large project with multiple tracks, and we could have, say, 4-10 merges within a 10-30 minute window. So obviously we do not want 4-10 jobs in the queue all running the same thing.
So what would be the best approach to achieving the above? I.e. Jenkins triggers on a merge and waits for x minutes; if there are no other merges, it triggers the build process; if a new merge arrives, it resets the counter back to x minutes and waits again.
Are there any plugins or triggers built into Jenkins with which we can achieve this? (I couldn't find anything obvious.) Or is this a case where we need to parameterise the build and have some script running?
I'm not aware of any plugin that does this. But if you're using the Pipeline job type, or are willing to convert the job to a Pipeline, the following Jenkins pipeline code will do the trick:
// Sleep for a certain time, in this case 20 seconds
sleep(20)

// Check if there is a newer build; if there is, abort this one.
// (Using the string 'NOT_BUILT' avoids having to import hudson.model.Result.)
if (currentBuild.nextBuild != null) {
    echo "Got newer build, aborting this one!"
    currentBuild.result = 'NOT_BUILT'
    return
}

// Do the rest of the building here
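Note that currentBuild.nextBuild only becomes non-null once the newer run has actually started, so the job must allow concurrent builds for this check to see it; otherwise the newer run just waits in the queue until this one finishes.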
Alternatively, you can trigger the build from a URL with a delay parameter, which postpones the actual start by the given quiet period (<your-jenkins> and <your-job> are placeholders for your instance and job):
https://<your-jenkins>/job/<your-job>/build?delay=600sec
I have a Jenkins job that should not start building until another job has been built successfully at least once. They are not related per se, so I don't want to use triggers. Is there a way to do this?
Some background: I'm using SCM polling to trigger the second job. I've looked at the Files Found Trigger plugin, but that would keep on triggering the second job after the first one has been built. I've also found the Run Condition Plugin, but that seems to work only on build steps, not on the entire build.
Update - The second job copies artifacts from the first job. As long as the first job has never completed successfully, the Copy Artifact step fails. I am trying to prevent that failure, by not even attempting to build the second job until the first job has completed once.
One solution is to use the Build Flow plugin.
You can create a new flow job and use this DSL:
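A minimal sketch, assuming placeholder job names first-job and second-job:

// Keep retrying the first job until it succeeds (up to 10 attempts),
// then trigger the dependent job.
retry(10) {
    build("first-job")
}
build("second-job")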
I've used 10 in the retry section, but you can use any number.
This flow job can be triggered by polling the same SCM URL as your second job.
Update: here is a second solution.
You can use the HTTP Request plugin.
If you want to test that your first job has been built successfully at least once, you can test this URL:
http://your.jenkins.instance/job/your.job/lastSuccessfulBuild/
One example: as my build has never been successful, the lastSuccessfulBuild URL doesn't exist, so the HTTP Request step changes my build status to failure.
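If you are on Pipeline, the same check can be done with the plugin's httpRequest step; a minimal sketch (the URL is the placeholder from above):

// Fails the build when the URL doesn't answer with a success code,
// i.e. when the first job has never completed successfully.
httpRequest url: 'http://your.jenkins.instance/job/your.job/lastSuccessfulBuild/',
    validResponseCodes: '200:399'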
Does it help?
The Block queued job plugin can be used for this: https://wiki.jenkins-ci.org/display/JENKINS/Block+queued+job+plugin