Jenkins pipeline: How to avoid a Gerrit change verify job being superseded by a newer change

I have a pipeline running, triggered by several Gerrit review hooks, e.g.
branch-v1.0
branch-v2.0
Normally I receive my Verified votes according to the result of the corresponding job run, e.g. if the run finishes successfully with passing tests, I get Verified+1 back in my Gerrit system.
My problem:
While a job is running to verify my Gerrit change, a newer "verify job" for another change or patchset always cancels the currently running job. It doesn't matter whether the new change comes from a different branch, and it makes no difference whether the new change is related to the current one at all. The currently running change is always superseded.
in the console:
In this case, job A canceled an older job B, and later A itself was canceled by a newer job C:
Canceling older #3128
Waiting for builds [3126]
Canceled since #3130 got here
So, does anybody know how to avoid the canceling of the currently running job?
I wanted to use a multibranch pipeline (though I really do not know whether this would help), but as far as I know the Gerrit plug-in is currently not supported by multibranch pipelines or the Blue Ocean project.
https://issues.jenkins-ci.org/browse/JENKINS-38046
There is a new Gerrit plug-in in development, but there is no information on when it will be available (or 'production ready'). See the following comment in the issue:
lucamilanesio added a comment - 2017-08-18 15:40
Thanks for your support!

Prevent SonarQube from failing Pull Request checks

I've set up an automatic "pull request check" via Jenkins/GitHub/SonarQube integration.
The workflow is as follows:
GitHub pull request created by user → GitHub webhook triggers and calls the Jenkins API to execute the SonarQube scanner → the scanner reports to the SonarQube server → the SonarQube server calls the GitHub API (create commit statuses, ref https://developer.github.com/v3/repos/statuses/) and posts a comment about the PR.
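For context, the scanner invocation behind such a setup typically passes the SonarQube GitHub plugin's pull-request analysis properties. A rough Pipeline-flavoured sketch of that step is below; the server name, repository, and the PR_NUMBER/GITHUB_TOKEN variables are placeholders, not anything from the original question:

    node {
        stage('SonarQube PR analysis') {
            // 'MySonarServer' is whatever name the SonarQube server was given
            // under Manage Jenkins -> Configure System (placeholder here).
            withSonarQubeEnv('MySonarServer') {
                sh '''
                    sonar-scanner \
                      -Dsonar.analysis.mode=preview \
                      -Dsonar.github.repository=my-org/my-repo \
                      -Dsonar.github.pullRequest=$PR_NUMBER \
                      -Dsonar.github.oauth=$GITHUB_TOKEN
                '''
            }
        }
    }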
The issue is that it marks the PR check as failed just because the code didn't pass its code health checks. The build passed, but the code is "dirty", and that causes the PR to be marked as unacceptable. I'd like to find a way to prevent code quality checks from appearing as an actual status of the commit, and only allow commenting.
Additional images to provide some context:
SonarQube uses a techuser account token to post its analysis summary as a comment on the PR thread. (Sorry for the black boxes, corporate stuff..)
This functionality is everything we need, nothing more.
However, the plugin does one more thing: it marks the commit as a failure. Note that we're already using something else to check for actual build failures. Although the build didn't fail, SonarQube marking the commit as a failure because of code quality makes the whole commit display as a failure. I'd like to prevent SonarQube from setting commit check statuses while still letting it comment on the PR. I couldn't find an option for anything like that in the Jenkins plugin configuration, the SonarQube admin page, or the SonarQube scanner script documentation.
Thanks in advance.
What you want to achieve is currently not possible when using the SonarQube GitHub plugin, since this behaviour is hardcoded in the plugin and there is no configuration option to customize this.
In upcoming versions of SonarQube and SonarCloud, pull requests will have built-in support and the behaviour will be the following:
The status will be red if there is at least one open issue on the PR analyzed by SonarQube/SonarCloud
Teams will have the ability to mark those issues as "Confirmed" in SonarQube/SonarCloud (to acknowledge that they accept this technical debt), in which case the status will be automatically turned to green in GitHub

Jenkins pipeline: How to prompt "Do you really want to build?"

We are using Jenkins with Pipeline Jobs and of course the awesome Jenkinsfile.
Twice now a developer accidentally clicked on the build button, which ended up causing a bit of chaos. I am trying to figure out if it is possible to have something like a popup that asks "Do you really want to start this build?".
Any ideas on this user-related issue are welcome.
Have a look at the article Controlling the Flow with Stage, Lock, and Milestone in the Jenkins blog, which covers a bit more than only asking for confirmation.
Essentially, there is the input step, which requires user input to continue pipeline execution.
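For illustration, a minimal scripted-pipeline sketch of the input step (the stage names and the message text are just examples):

    node {
        stage('Confirm') {
            // The pipeline pauses here until someone clicks "Proceed" or "Abort"
            input message: 'Do you really want to start this build?'
        }
        stage('Build') {
            echo 'Building...'
        }
    }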
The problem with the input step as suggested by StephenKing is that you won't be able to run the build automatically anymore, as it will always ask the user to confirm the input step manually. This prevents "automatic builds" triggered e.g. by webhooks or CRON jobs.
One workaround is to combine the input with a timeout, so that the build proceeds once the timeout expires. That way a user can at least abort an unintended build, but it leads to significantly longer build times.
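A sketch of that workaround, assuming a scripted pipeline and a placeholder timeout of 5 minutes:

    node {
        stage('Confirm') {
            try {
                timeout(time: 5, unit: 'MINUTES') {
                    input message: 'Do you really want to start this build?'
                }
            } catch (err) {
                // No answer within the timeout: continue automatically so that
                // webhook/CRON-triggered builds are not blocked. Note that a manual
                // "Abort" on the input also lands here, so a real implementation
                // would need to inspect the cause of the exception.
                echo 'No confirmation received - continuing automatically.'
            }
        }
        stage('Build') {
            echo 'Building...'
        }
    }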
What we did at my old company was create a so-called "parameterized" build, which had a simple checkbox "Do you really want to build this job?" that exposed a flag REALLY_BUILD as an environment variable. You can then test for ${REALLY_BUILD}==1 in the Jenkinsfile. Every time a developer triggers a build, they have to check the box; otherwise the build will not start / immediately stops.
When you trigger your job via a webhook, you can pass REALLY_BUILD as a URL parameter (see this comment in the Jenkins tracker) and access it in the Jenkinsfile as before.
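A rough Jenkinsfile sketch of that approach; the parameter name REALLY_BUILD is the one from the description, everything else (stage names, messages) is a placeholder:

    properties([
        parameters([
            booleanParam(name: 'REALLY_BUILD', defaultValue: false,
                         description: 'Do you really want to build this job?')
        ])
    ])

    node {
        stage('Guard') {
            if (!params.REALLY_BUILD) {
                // Checkbox not ticked: stop the build immediately
                error 'REALLY_BUILD was not set - aborting.'
            }
        }
        stage('Build') {
            echo 'Building...'
        }
    }

A webhook or script would then trigger the job with the parameter set, e.g. http://your.jenkins.instance/job/your-job/buildWithParameters?REALLY_BUILD=true (the job URL is a placeholder; buildWithParameters is the standard remote-trigger endpoint for parameterized jobs).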
Here is another resource for how to use parameters in Pipelines.

Jenkins: Tracing the history of an unsaved new test definition (copied from another test definition)?

Recently, in our enterprise production setup, it seems someone tried to set up a new job / test definition by copying an existing, identical job. However, they seem to have NOT saved it (and probably, I am guessing here, closed the browser and lost the session).
But the new job got saved even though it was not set to stable or active; we know about this because changes uploaded to Gerrit started failing in this newly set up, partial job (because these changes were in certain repos that matched certain TDD settings).
Question: the Jenkins system has no trace of who set up the job in the 'configure versions' option. Is there any way to find out who set up the job and when that was done?
No, Jenkins does not store that information by default.
If your Jenkins instance happens to be running behind an Apache or Nginx web server, there might be access logs that can help you. To find out when the job was created, you could look at when its config.xml file was created/modified.
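If you have admin access, one quick way to check that timestamp is the Script Console (Manage Jenkins → Script Console). A small sketch, where the job name is a placeholder:

    import jenkins.model.Jenkins

    // Prints when the job's config.xml on disk was last modified
    def job = Jenkins.instance.getItemByFullName('the-suspicious-job')
    println new Date(job.configFile.file.lastModified())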
However, there are a few plugins that can add this functionality so that you won't have this problem again:
JobConfigHistory Plugin – Tracks changes in your job configurations and gives the ability to restore old versions.
Audit Trail Plugin – Keeps a log of who performed particular Jenkins operations, such as configuring jobs.

Jenkins job wait for first successful build of other job

I have a Jenkins job that should not start building until another job has been built successfully at least once. They are not related per se, so I don't want to use triggers. Is there a way to do this?
Some background: I'm using SCM polling to trigger the second job. I've looked at the Files Found Trigger plugin, but that would keep on triggering the second job after the first one has been built. I've also found the Run Condition Plugin, but that seems to work only on build steps, not on the entire build.
Update: The second job copies artifacts from the first job. As long as the first job has never completed successfully, the Copy Artifact step fails. I am trying to prevent that failure by not even attempting to build the second job until the first job has completed once.
One solution is to use the Build Flow plugin.
You can create a new flow job and use this DSL:
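(The original snippet is not reproduced here; a minimal sketch of what such a Build Flow DSL could look like, with placeholder job names, assuming the first job is retried until it succeeds and the dependent job runs afterwards:)

    // Build Flow DSL sketch - "first-job" and "second-job" are placeholders
    retry(10) {
        build("first-job")     // retried up to 10 times if first-job fails
    }
    build("second-job")        // then run the dependent job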
I've used 10 in the retry section, but you can use any number.
This flow job can be triggered by monitoring the same SCM URL as your second job.
Update: here is a second solution.
You can use the HTTP Request plugin.
If you want to test that your first job has been built successfully at least once, you can test this URL:
http://your.jenkins.instance/job/your.job/lastSuccessfulBuild/
One example:
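(The original screenshot is not reproduced here. As a rough Pipeline-flavoured sketch of the same idea, using the plugin's httpRequest step and the placeholder URL from above:)

    node {
        stage('Check first job') {
            // Fails this build if the URL cannot be fetched, e.g. HTTP 404
            // because the other job has never completed successfully
            httpRequest url: 'http://your.jenkins.instance/job/your.job/lastSuccessfulBuild/api/json'
        }
        stage('Build') {
            echo 'First job has succeeded at least once - continuing.'
        }
    }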
As my build has never been successful, the lastSuccessfulBuild URL doesn't exist. The HTTP Request changes my build status to failure.
Does it help?
The Block queued job plugin can be used for this: https://wiki.jenkins-ci.org/display/JENKINS/Block+queued+job+plugin

How to get the build status through an environment variable

I am using Jenkins for continuous integration.
For my build, I am triggering an email using an Ant task. I am not able to find an environment variable that I can pass to Ant for sending the build status (success/failure/stable) in the email.
I want to know how I can get an environment variable for the build status. If one is not available, what is the alternative option for getting the build status?
Thanks in Advance
varghese
Why use Ant to send emails from Jenkins when you have two great plugins that do just that?
The default mail notification is quite good, and if you would like to have more control, I suggest using the Email-ext plugin, which is very comprehensive.
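For example, in a Pipeline job a status mail via Email-ext can be as small as the sketch below; the recipient and stage are placeholders, and $BUILD_STATUS, $PROJECT_NAME, $BUILD_NUMBER and $BUILD_URL are tokens expanded by the plugin:

    pipeline {
        agent any
        stages {
            stage('Build') {
                steps { echo 'Building...' }
            }
        }
        post {
            always {
                // Email-ext expands its tokens in subject and body
                emailext to: 'team@example.com',
                         subject: 'Build $BUILD_STATUS: $PROJECT_NAME #$BUILD_NUMBER',
                         body: 'See $BUILD_URL for details.'
            }
        }
    }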
If you still wish to use Ant to send your mail notifications including the status, you will have to break your process into two steps, where the first part runs the build and the second one runs the Ant script to report the status.
In this case, you will need to trigger the second job from the first job via the Parameterized Build plugin; see my answer here: trigger other configuration and send current build status with Jenkins
The build status is not set until the job has finished running, so there is no easy way to push the build status to a process triggered within the build itself. You could pull the build status via the API, but this would have to be an externally triggered process due to the constraint mentioned above. Any reason you aren't using the built-in email support or one of the excellent email extension plugins such as this one?
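For the "pull it via the API" route, any HTTP client will do; as a sketch from a Pipeline job (assuming the HTTP Request and Pipeline Utility Steps plugins, with a placeholder job URL):

    node {
        stage('Read status') {
            def response = httpRequest url: 'http://your.jenkins.instance/job/your.job/lastBuild/api/json'
            def json = readJSON text: response.content
            // "result" is SUCCESS, FAILURE, UNSTABLE, ABORTED, or null while running
            echo "Last build result: ${json.result}"
        }
    }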
