I've set up Jenkins with a multibranch pipeline project, and I use the API to trigger builds of specific branches in it.
My problem is that this only works if the branch has already been indexed; otherwise the API returns a 404.
I have managed to run the branch indexing via the API, but I have no indication of when this task has completed.
By trial and error I have figured out that:
the API call to branch indexing does not return a queue ID, so I cannot check its status;
the indexing task runs immediately, regardless of the queue and the executor limits;
I have been unable to run this task from a pipeline (another job); trying to do so results in an error.
Is there any way to tell whether the task has completed, and whether it succeeded?
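(For illustration, one way to watch for completion is to poll the multibranch job's "Scan Multibranch Pipeline" log, which ends with a "Finished: <result>" line once the scan is done. A minimal plain-Groovy sketch; the Jenkins URL, job name, and open read access are assumptions, and a read immediately after triggering may still show the previous scan's log:)

def jobUrl = 'https://jenkins.example.com/job/my-multibranch'  // assumed job URL
def result = null
for (int i = 0; i < 60 && result == null; i++) {
    sleep 5000  // poll every 5 seconds, for up to 5 minutes
    // the indexing log ends with "Finished: SUCCESS" (or FAILURE/ABORTED) once done
    def log = new URL("${jobUrl}/indexing/consoleText").text
    def m = log =~ /Finished: (\w+)/
    if (m.find()) { result = m.group(1) }
}
println "Branch indexing finished with: ${result}"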
Related
We set up GitLab integration with Jenkins using the Jenkins GitLab plugin, triggering Jenkins webhooks (a regular Pipeline-type job) on GitLab Merge Request events (configured under GitLab -> Repo -> Integrations). We successfully display the job build status on the Merge Request page (by using updateGitlabCommitStatus in the pipeline); it is displayed as the status of a pipeline which, as I understand it, is created and associated with the last commit on the source branch.
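(For reference, a minimal sketch of how that step is called from the pipeline; the status name is illustrative:)

// sketch only; 'jenkins-build' is an arbitrary status name
updateGitlabCommitStatus name: 'jenkins-build', state: 'running'
// ... build and test steps ...
updateGitlabCommitStatus name: 'jenkins-build', state: 'success'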
At some point I canceled this pipeline from the MR page, and after that closed and reopened the MR, thus re-triggering the build.
Unfortunately, after cancelling the pipeline, the latest build job statuses were reflected neither in the MR nor in the pipeline itself. The pipeline page would not even display the newest jobs running in Jenkins.
I tried deleting this specific pipeline (via curl; we are using GitLab 12.3, which doesn't allow deleting pipelines via the GUI) and creating a new Merge Request (same branch, same commit), hoping that a new pipeline would be created, but nothing changed. It seems that I have no way to display the build status again for this specific commit.
Any suggestions on how to overcome this?
Thanks in advance!
I have a similar case, and the only way I found to do this is to re-run the pipeline from GitLab... You have to go into the integrations page and look through all the requests sent to Jenkins. Once you locate the correct one, click Resend and it should give you the correct status.
From my observations, the updateGitlabCommitStatus command only works when the build is invoked from a webhook.
I have a Jenkins pipeline which contains a series of jobs (for testing using Selenium and Cucumber BDD). Every time we run the pipeline, even when the functional test passes (I call this the test status), it takes time to save the artifacts before the job is considered PASSED (I call this the job status). So for a simple test that takes only 1 minute to run, saving the artifacts from the Jenkins slave to the Jenkins master takes around the same time or more before the job is considered passed. With regard to fast feedback to the team while running these jobs, this slows down the whole flow.
So, I wonder if there's a way to modify or configure the post-build actions to send the test status to the pipeline right after running the test (while still saving the artifacts)?
I just configured the post-build actions:
Archive the artifacts - Files to archive: **
My expectation, basically, is that the test status (passed/failed) will be passed right away to the pipeline build script, so that the pipeline 'acknowledges' it much faster.
As per my understanding, the success or failure status can't be sent to the upstream job until the upload has completed.
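(If the orchestration is a Pipeline job, one possible workaround, sketched below with assumed script and report paths, is to publish the test results and send the feedback in a post step of the test stage, before a separate archiving stage:)

pipeline {
    agent any
    stages {
        stage('Test') {
            steps {
                sh './run_tests.sh'  // assumed test runner
            }
            post {
                always {
                    junit 'reports/**/*.xml'  // records the test status early
                    // send feedback to the team here (mail, chat, commit status, ...)
                }
            }
        }
        stage('Archive') {
            steps {
                archiveArtifacts artifacts: 'artifacts/**', allowEmptyArchive: true
            }
        }
    }
}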
I'm trying to set up a scenario where a pull request created on GitHub triggers a Jenkins multibranch pipeline, and where that multibranch pipeline uses the Generic Webhook Trigger Plugin to extract values from the POST request sent from GitHub to Jenkins for use in the script.
Unfortunately, as described on the Generic Webhook Trigger Plugin wiki:
Note: When configuring from pipeline, that pipeline needs to run once, to apply the plugin trigger config, and after that this plugin will be able to trigger the job. This is how Jenkins works, not something implemented in this plugin. You can avoid this by using Job DSL and have Job DSL create pipeline jobs with the plugin configured in that DSL.
This would be OK with a normal pipeline, since it would be a one-off at creation of the Jenkins job. The problem, however, is that a multibranch pipeline creates a new job whenever a new branch/PR is created, which means that for each pull request I create on GitHub (which triggers my multibranch pipeline script), I have to run it twice to get the generic webhook functionality working. Having to resubmit for each PR would be tedious for long-running projects.
It seems to me that there are two possible approaches to solving/improving on this problem. One is to play around with DSL jobs (as suggested by the wiki); I tried this and couldn't get it to work (it added a huge amount of complexity to the setup, so I've abandoned it for now).
The second possible solution is as follows: when a PR is created in GitHub, the webhook causes a new job to be created in the multibranch pipeline corresponding to that PR; the first time the multibranch pipeline runs, the first build of this newly created job fails for the reason given in the quote above; a solution might then involve detecting that the first build failed and somehow telling Jenkins to rebuild that job.
So my question relates to this second approach: how can I most neatly run a rebuild for this multibranch pipeline upon the creation of a PR on github?
Any advice/suggestions would be appreciated!
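(For reference, the in-pipeline trigger configuration that the quoted note refers to looks roughly like this; the token, variable name, and JSONPath are illustrative:)

pipeline {
    agent any
    triggers {
        GenericTrigger(
            genericVariables: [[key: 'pr_action', value: '$.action']],  // JSONPath into the POST body
            token: 'my-job-token',
            causeString: 'Triggered by GitHub PR event'
        )
    }
    stages {
        stage('Show') {
            steps {
                echo "PR action: ${env.pr_action}"  // contributed as an environment variable
            }
        }
    }
}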
For triggering a multibranch pipeline by webhook, you can use this plugin:
"Multibranch Scan Webhook Trigger"
https://plugins.jenkins.io/multibranch-scan-webhook-trigger/
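(A rough sketch of triggering it, in plain Groovy; per the plugin's documentation, the scan is triggered by a POST to its invoke endpoint, and the URL and token here are illustrative:)

// POST to the plugin's invoke endpoint to trigger a scan of the multibranch job
def conn = new URL('https://jenkins.example.com/multibranch-webhook-trigger/invoke?token=MY_TOKEN').openConnection()
conn.requestMethod = 'POST'
println "Triggered scan, HTTP ${conn.responseCode}"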
Actually that is not true for multibranch pipelines; only ordinary pipelines need to run twice.
I updated the docs to read:
When configuring from pipeline (not multibranch pipeline)...
I want to do a very simple thing: for every new pull request created under my repo, I want to create a new Jenkins job with a similar configuration (run some batch) that will check out the branch being merged (not the destination branch).
I would also like to delete this job after the pull request is approved, but that's not as important.
How do I do that? Every Jenkins plugin that I found creates jobs for all my branches, or for a specified list of branches, instead of just for new ones or just for unmerged pull requests.
Here is one way you could solve this:
1. Create a template job containing the logic you want to run for each new branch (i.e. run some batch).
2. Create a job that is started for every new pull request in your repo. You could probably do this with the Script SCM Plugin, using a short Groovy script.
3. Inside this triggered job, clone the job from step 1 using the Jobcopy Plugin. Replace any strings (e.g. the git URL) with whatever is needed to get the new job working.
4. You could write another job that is triggered via the Script SCM Plugin when a branch needs to be deleted; the generated job can then be removed using the Groovy Postbuild Plugin.
OK, I finally succeeded, and it was WAY EASIER than I thought. I found a Jenkins plugin called "Bitbucket pullrequest builder plugin", and it makes it incredibly easy to build jobs for pull requests. The only thing is that I couldn't make it work with any OAuth consumer, and had to give it my own credentials. But other than that it works beautifully.
This is very similar to what we did in our team (we have more than 10 development branches and also a lot of release branches).
I think the easiest way is as follows:
Plugins to be used:
gerrit trigger plugin
Used to trigger the flow when a new commit comes in for review
job dsl plugin
Used to generate the jobs based on the DSL script
build flow plugin
Used to define the execution flow
Create a Jenkins build flow job "EntryPoint" (this job will be triggered whenever a new commit is pushed for review).
Create a job generator job (this job will invoke the Job DSL script to generate the template jobs based on the input parameters, such as the branch; a sketch of such a script is shown after the flow below).
Create a new job to do the cleanup work, or, as Daniel said, do it with a Groovy post-build step.
Inside the build flow job, a simple flow looks as follows:
// Get the current branch from the gerrit trigger plugin
def currentBranch = params["GERRIT_BRANCH"]
// Invoke the job generator job and pass the branch info to it
build("job_generator", BRANCH: currentBranch)
// Invoke the job generated by job_generator
build("${currentBranch}_Build")
// Remove the generated job
build("CleanUpJob")
I've got a Jenkins job that is intended to do the following:
Build a project and deploy it to a test server
Run tests
If the tests fail, roll back the server to the previous version
If the tests succeed, update the version in our source control system
Because we have a single test server, we need to ensure that Jenkins only runs one instance of this job at a time. Unfortunately, we can't seem to find a way to run a job on failure while also keeping the upstream job from executing while the downstream job is running.
Is there an easy way to do this? Is there a better way?
The Jenkins Post Build Task plugin allows you to run tasks in a job after a failure. Rolling the server back sounds more like a task than a job, so that might suit.
Otherwise, there are a couple of plugins that allow for more complex pipelining features. The Pipeline Plugin seems to be the most popular at the moment.
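(As a rough sketch of how this could look as a single Pipeline job, assuming deploy/rollback/tag scripts exist; disableConcurrentBuilds() ensures only one run of the job at a time:)

pipeline {
    agent any
    options { disableConcurrentBuilds() }  // never run two builds of this job concurrently
    stages {
        stage('Deploy to test server') { steps { sh './deploy.sh test' } }
        stage('Run tests')             { steps { sh './run_tests.sh' } }
        stage('Update version')        { steps { sh './tag_version.sh' } }  // only reached if tests pass
    }
    post {
        failure { sh './rollback.sh test' }  // roll the test server back on failure
    }
}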
In the job configuration, under Advanced Project Options (just before the SCM part), click the Advanced... button. You can now choose to Block build when upstream/downstream is executing.
As for running conditional steps on failure:
- Use Post Build Tasks as Paul suggested, or
- Configure logic using Conditional Build steps