I have a pipeline that should run an e2e test after deploying to the UAT environment; the e2e test runs on GitLab. What I want is for GitLab to use a webhook to trigger Jenkins to decide whether the build can go to production or not. So the pipeline looks as below.
After deploying to UAT, I can send the webhook to trigger the e2e test on GitLab, and GitLab can send the webhook to Jenkins. However, even though the e2e pipeline job receives the webhook and builds successfully, that does not change the pipeline job status, and hence we cannot proceed to deploy to production unless we manually click the trigger on the e2e job. I have already tried using currentBuild.currentResult and currentBuild.result, which doesn't seem to work.
What should I do to solve this?
Update
I am not using the new pipeline style with Groovy; instead, I'm doing it the old way, like this. But I suppose this is not related to the issue I'm trying to solve here.
Let me provide more details about the problem. After the e2e test is done, I send the request below from GitLab to Jenkins:
curl "SERVER_URL/job/MY_XXX_PROJECT/view/Pipeline/job/X.X%20E2E%20my_e2e_test_result/buildWithParameters?token=xxxxxxxxxxxxxx&PARENT_BUILD_NUMBER=123¤tBuild.currentResult=SUCCESS¤tBuild.Result=SUCCESS"
And here is the pipeline. If I check the job at SERVER_URL/job/MY_XXX_PROJECT/view/Pipeline/job/X.X%20E2E%20my_e2e_test_result/, the job actually built with SUCCESS.
However, the pipeline status looks as if nothing happened, and we cannot proceed to deploy to production without manually clicking the trigger button on the same (e2e) job again.
If you're just looking for a way for the job to update its own state, use the unstable and the error steps.
pipeline {
    agent any
    stages {
        stage('Hello') {
            steps {
                echo 'This job is now a success. All jobs are a success until marked otherwise.'
                //success 'This will fail. There is no such thing as a success step'
                unstable 'This stage is now unstable'
                error 'This job is now failed'
                // Note: error throws an exception, so this line only runs if the error above is caught
                unstable 'This job is still failed. Build results can only get worse. They can never be improved.'
            }
        }
    }
}
Notice that there is no such thing as a success step, presumably because all jobs are a success until told otherwise, and the overall result of a job cannot be improved. See the documentation for catchError here.
Note that the build result can only get worse, so you cannot change the result to SUCCESS if the current result is UNSTABLE or worse.
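For completeness, here is a minimal sketch of catchError; the stage names and the failing sh step are purely illustrative. It lets a step fail while capping the damage to the overall build result, which is the closest thing to "improving" an outcome that the model allows:
pipeline {
    agent any
    stages {
        stage('Flaky step') {
            steps {
                // If the enclosed step fails, mark this stage FAILURE
                // but only downgrade the build to UNSTABLE, then continue.
                catchError(buildResult: 'UNSTABLE', stageResult: 'FAILURE') {
                    sh 'exit 1' // illustrative failing step
                }
            }
        }
        stage('Still runs') {
            steps {
                echo "Build result so far: ${currentBuild.currentResult}"
            }
        }
    }
}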
Related
Can you please help? I have the following scenario, and I have been through many videos and blogs but could not find anything matching my use case.
Requirement:
To write a CI/CD pipeline in GitLab that can facilitate the following stages in this order:
- verify # unit test, sonarqube, pages
- build # package
- publish # copy artifact in repository
- deploy # Deploy artifact on runtime in an test environment
- integration # run postman/integration tests
All other stages are fine and working, but for the deploy stage, because of a few restrictions, I have to submit an existing Jenkins job using the Jenkins remote API with the following script. The problem is that the script returns an asynchronous response: it merely starts the Jenkins job, so the deploy stage completes and the pipeline moves on to the next stage (integration).
Run Jenkins Job:
  image: maven:3-jdk-8
  tags:
    - java
  environment: development
  stage: deploy
  script:
    - artifact_no=$(grep -m1 '<version>' pom.xml | grep -oP '(?<=>).*(?=<)')
    - curl -X POST http://myhost:8081/job/fpp/view/categorized/job/fpp_PREP_party/build --user mkumar:1121053c6b6d19bf0b3c1d6ab604f22867 --data-urlencode json="{\"parameter\":[{\"name\":\"app_version\",\"value\":\"$artifact_no\"}]}"
Note: We are using the GitLab CE edition, and the Jenkins CI project service is not available.
I am looking for a possible way of triggering the Jenkins job from the pipeline such that my integration stage starts executing only on successful completion of the Jenkins job.
Thanks for the help!
Retrieving the status of a Jenkins job that is triggered programmatically through the remote access API is notoriously convoluted.
Normally you would expect to receive, in the response header under the Location attribute, a URL that you can poll to get the status of your request, but unfortunately there are some in-between steps before you reach that point. You can find a guide in this post. You may also have a look at this older post.
Once you have the URL, you can poll it and parse the job status, then run either sh "exit 1" or sh "exit 0" in your script to force the job that is invoking the external job to fail or succeed, depending on how you want to assert the result of the remote job.
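As a minimal sketch of that polling loop, here is what it could look like as a Jenkins scripted-pipeline snippet, matching the sh "exit 0"/"exit 1" idea above. The build URL, credentials, timeout, and crude grep-based parsing are all assumptions; adapt them to the URL you recovered from the Location header:
// Sketch: poll a remote build's JSON API until it reports a result,
// then fail or pass this job accordingly. The URL and user are hypothetical.
def remoteBuildUrl = 'http://myhost:8081/job/fpp/job/fpp_PREP_party/123'
timeout(time: 30, unit: 'MINUTES') {
    waitUntil {
        // "result" is null in the JSON while the remote build is still running
        def result = sh(
            script: "curl -s --user USER:APITOKEN '${remoteBuildUrl}/api/json' | grep -o '\"result\":\"[A-Z]*\"' || true",
            returnStdout: true
        ).trim()
        if (result.contains('FAILURE') || result.contains('ABORTED')) {
            sh 'exit 1' // propagate the remote failure to this job
        }
        return result.contains('SUCCESS')
    }
}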
I have created a Jenkins pipeline job called "pipelinejob" with the below script:
pipeline {
    agent any
    stages {
        stage('Setup') {
            steps {
                //echo "${BRANCH_NAME}"
                echo "${env.BRANCH_NAME}"
                //echo "${GIT_BRANCH}"
                echo "${env.GIT_BRANCH}"
            }
        }
    }
}
Under General, I have selected "GitHub project" and inserted my company's GitHub URL in the form:
https://github.mycompany.com/MYPROJECTNAME/MY_REPOSITORY_NAME/
Under Build Triggers, I have checked "GitHub hook trigger for GITScm polling".
I have created a simple job called "simplejob" with the same configuration as 1) and 2).
In my company's GitHub, I have created a webhook like "jenkins_url/jenkins/github-webhook/".
I commit a change in "mybranch" in "MY_REPOSITORY_NAME"
My simple job "simplejob" is triggered and built successfully
My pipeline job "pipelinejob" is not triggered
In the Jenkins log I see the below:
Sep 12, 2019 2:42:45 PM INFO org.jenkinsci.plugins.github.webhook.subscriber.DefaultPushGHEventSubscriber$1 run
Poked simplejob
Nothing regarding my "pipelinejob".
Could you please point me in the right direction as to what to check next?
P.S. I have manually executed my "pipelinejob" successfully.
I wasted two days of work on this, as none of the previous solutions worked for me. :-(
Eventually I found the solution on another forum:
The problem is that if you use a Jenkinsfile that is stored in GitHub, along with your project sources, then this trigger must be configured in the Jenkinsfile itself, not in the Jenkins or project configuration.
So add a triggers {} block like this to your Jenkinsfile:
pipeline {
    agent any

    triggers {
        githubPush()
    }

    stages {
        ...
    }
}
Then...
Push your Jenkinsfile into GitHub
Run one build manually, to let Jenkins know that you intend to use this trigger.
You'll notice that the "GitHub hook trigger for GITScm polling" checkbox will be checked at last!
Restart Jenkins.
The next push should trigger an automated build at last!
On the left side-pane of your pipeline job, click GitHub Hook log. If it says 'Polling has not run yet', you will need to manually trigger the pipeline job once before Jenkins registers it to poke on receiving hooks.
Henceforth, the job should automatically trigger on GitHub push events.
I found an answer to this question for a scripted pipeline file. We need to declare the GitHub push event trigger in the Jenkinsfile as follows.
properties([pipelineTriggers([githubPush()])])

node {
    git url: 'https://github.com/sebin-vincent/Treasure_Hunt.git', branch: 'master'

    stage('Compile Stage') {
        echo "compiling"
        echo "compilation completed"
    }

    stage('Testing Stage') {
        echo "testing completed"
        echo "testing completed"
    }

    stage("Deploy") {
        echo "deployment completed"
    }
}
The properties declaration should be on the first line.
git url: The repository URL for which the pipeline should be triggered.
branch: The branch for which the pipeline should be triggered. When you specify the branch as master and make changes to other branches like develop or QA, the pipeline won't be triggered.
Hope this helps someone who comes here looking for an answer to the same problem with a Jenkins scripted pipeline :-).
The thing is, whenever you create a pipeline job that is to be triggered by a GitHub webhook on git push, you first need to build the pipeline job manually once. If it builds successfully, Jenkins registers it to be poked on receiving hooks, and from the next git push your pipeline job will trigger automatically.
Note: also make sure that the first manual build of the pipeline job completes successfully, otherwise Jenkins will not poke it. If it fails to build, the webhook will never trigger the job.
I can call another Jenkins job using the build command. Is there a way I can tell another job to do a branch scan?
A multibranch pipeline job has a UI button, "Scan Repository Now". When you press this button, it does a checkout of the configured SCM repository, detects all the branches, and creates a sub-job for each branch.
I have a multibranch pipeline job for which I have selected the "Suppress automatic SCM triggering" option because I only want it to run when I call it from another job. Because this option is selected, the multibranch pipeline doesn't automatically detect when new branches are added to the repository. (If I click "Scan Repository Now" in the UI it will detect them.)
Essentially I have a multibranch pipeline job and I want to call it from another multibranch pipeline job that uses the same git repository.
node {
    if (env.BRANCH_NAME == "the-branch-I-want" && other_criteria) {
        //scanScm "../my-other-multibranch-job" <--- scanScm is a fake command I made up
        build "../my-other-multibranch-job/${env.BRANCH_NAME}"
    }
}
I get an error on that build line, because the target multibranch pipeline job does not yet know that BRANCH_NAME exists. I need a way to trigger an SCM re-scan in the target job from this current job.
Similar to what you figured out yourself, I can contribute my optimization that actually waits until the scan has finished (but is subject to Script Security):
// Helper function to trigger branch indexing for a certain multibranch project.
// The permissions that this needs are pretty evil... but there's currently no other choice.
//
// Required script approvals:
//   - method jenkins.model.Jenkins getItemByFullName java.lang.String
//   - staticMethod jenkins.model.Jenkins getInstance
//
// See:
//   https://github.com/jenkinsci/pipeline-build-step-plugin/blob/3ff14391fe27c8ee9ccea9ba1977131fe3b26dbe/src/main/java/org/jenkinsci/plugins/workflow/support/steps/build/BuildTriggerStepExecution.java#L66
//   https://stackoverflow.com/questions/41579229/triggering-branch-indexing-on-multibranch-pipelines-jenkins-git
void scanMultiBranchAndWaitForJob(String multibranchProject, String branch) {
    String job = "${multibranchProject}/${branch}"
    // The `build` step does not support waiting for branch indexing (ComputedFolder job type),
    // so we need some black magic to poll and wait until the expected job appears.
    build job: multibranchProject, wait: false
    echo "Waiting for job '${job}' to appear..."
    while (Jenkins.instance.getItemByFullName(job) == null || Jenkins.instance.getItemByFullName(job).isDisabled()) {
        sleep 3
    }
}
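Hypothetical usage from another pipeline, assuming the helper above is defined in the same script or a shared library:
// Trigger indexing, wait for the branch job to appear, then build it
scanMultiBranchAndWaitForJob('my-other-multibranch-job', env.BRANCH_NAME)
build job: "my-other-multibranch-job/${env.BRANCH_NAME}"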
Ended up figuring this out shortly after posting the question. Calling build against the base multibranch pipeline job as opposed to a branch causes it to re-scan. The solution to my above snippet would have ended up looking something like...
node {
    if (env.BRANCH_NAME == "the-branch-I-want" && other_criteria) {
        build job: "../my-other-multibranch-job", wait: false, propagate: false // scan for branches
        sleep 2 // scanning takes time
        build "../my-other-multibranch-job/${env.BRANCH_NAME}"
    }
}
The wait: false is important because otherwise you get "ERROR: Waiting for non-job items is not supported". The multibranch "parent" job is closer to a folder than a job, but it's a folder that supports the build command, and it does so by scanning the SCM.
But solving this just led to another problem: with wait: false we have no way of knowing when the SCM scan finished. If you have a large repository (or you're short on Jenkins agents), the branch won't get discovered until after the second build command has already failed because the branch doesn't exist yet. You could bump the sleep time even higher, but that doesn't scale.
Fortunately, it turns out that manually initiating the SCM scan isn't even needed if you have GitHub webhooks set up for your Jenkins. The branch will be discovered more or less instantly, so for my purposes this is solved another way. The reason I was running into it is that we don't have webhooks set up in our dev Jenkins, but once I move this code to prod it will work fine.
If you're trying to use JobDSL to set up multibranch jobs calling multibranch jobs and you don't have webhooks or something equivalent, the better path is probably to abandon multibranch for your second tier of jobs and use JobDSL to create folders and manage the branch jobs yourself, as in the sketch below.
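A rough JobDSL sketch of that idea; the folder name, branch list, and repository URL are all hypothetical placeholders:
// Create a folder and one explicitly-managed pipeline job per branch,
// instead of letting a multibranch job discover them.
folder('second-tier')

['develop', 'release'].each { branchName ->
    pipelineJob("second-tier/${branchName}") {
        definition {
            cpsScm {
                scm {
                    git {
                        remote { url('https://github.com/example/repo.git') } // hypothetical repo
                        branch(branchName)
                    }
                }
                scriptPath('Jenkinsfile')
            }
        }
    }
}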
We have several Jenkins pipeline jobs set up as "pipeline from SCM" that check out a Jenkinsfile from GitHub and run it. There is sufficient try/catch-based error handling inside the Jenkinsfile to trap error conditions and notify the right channels. This blog post goes into quite a bit of depth about how to achieve this.
However, if there is an issue fetching the Jenkinsfile in the first place, the job fails silently. How does one generate notifications from general job launch failures, before the pipeline is even started?
A Jenkins SCM pipeline doesn't have any execution provision similar to catch/finally that is called if loading the Jenkinsfile fails, and I don't think there will be one in the future.
However, there is the global-post-script plugin, which runs a Groovy script after every build of every job on Jenkins. You have to place that script in the $JENKINS_HOME/global-post-script/ directory.
Using this, you can send notifications or email to admins based on the project that failed and/or the reason/exceptions for the failure.
Sample code that you can put in the script:
if ("$BUILD_RESULT" != 'SUCCESS') {
def job = hudson.model.Hudson.instance.getItem("$JOB_NAME")
def build = job.getBuild("$BUILD_NUMBER")
def exceptionsToHandle = ["java.io.FileNotFoundException","hudson.plugins.git.GitException"]
def foundExection = build
.getLog()
.split('\n')
.toList()
.stream()
.filter{ line ->
!line.trim().isEmpty() && !exceptionsToHandle.stream().filter{ex -> line.contains(ex)}.collect().isEmpty()
}
.collect()
.size() > 0;
println "do something with '$foundExection'"
}
You can validate your Jenkinsfile before pushing it to the repository.
Command-line Pipeline Linter
There are some IDE integrations as well.
Apparently this is an open issue with Jenkins: https://issues.jenkins.io/browse/JENKINS-57946
I have decided not to use Yogesh's answer mentioned earlier. For me it is simpler to just copy the content of the Jenkinsfile directly into the Jenkins project instead of pointing Jenkins to the Git location of the Jenkinsfile. However, in addition I keep the Jenkinsfile in Git; just make sure to keep the Git and Jenkins versions identical.
In the old configuration we had two jobs, test and build.
The build job ran after the test job had run successfully, but we could manually trigger build if we wanted to skip the tests.
After we switched to Multibranch Pipelines using a Jenkinsfile, we had to put those two jobs into the same file:
stage('Running tests') {
    ...
}
stage('Build') {
    ...
}
So now the build stage is only triggered after the tests run successfully, and we cannot manually trigger the build without commenting out the test steps and committing to the repository.
I am wondering if there is a better approach or practice for using the Jenkinsfile to overcome this limitation?
Using pipelines and a Jenkinsfile is becoming the standard and preferred way of running jobs on Jenkins nowadays, so using a Jenkinsfile is certainly the way to go.
One way to solve the problem is to make the job parameterized:
// Set the parameter properties; this will be done on the first run so that we can trigger with parameters manually
properties([parameters([booleanParam(defaultValue: true, description: 'Testing will be done if this is checked', name: 'DO_TEST')])])

stage('Running tests') {
    // Putting the check inside of the stage step so that we don't confuse the stage view
    if (params['DO_TEST']) {
        ...
    }
}

stage('Build') {
    ...
}
The first time the job runs, it will add the parameter to the job. After that, we can trigger it manually and select whether the tests should run. The default value will be used when the job is triggered by SCM.
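For what it's worth, here is a sketch of the same idea in declarative pipeline syntax, in case you are not using a scripted Jenkinsfile. The stage contents are placeholders; the parameter name is carried over from the snippet above:
pipeline {
    agent any
    parameters {
        booleanParam(defaultValue: true, description: 'Testing will be done if this is checked', name: 'DO_TEST')
    }
    stages {
        stage('Running tests') {
            // Skip this stage entirely when the box is unchecked
            when { expression { params.DO_TEST } }
            steps {
                echo 'running tests...'
            }
        }
        stage('Build') {
            steps {
                echo 'building...'
            }
        }
    }
}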