Establish a relationship between two Jenkins jobs on different Jenkins servers

I have a Jenkins job for test/QA automation scripts; let's name it TEST_JOB. For the application, I have a Jenkins build of the application source code; name it DEV_JOB.
My scenario: when DEV_JOB completes execution successfully, execute TEST_JOB immediately. I am aware of setting up an upstream/downstream project relationship [Build after other projects are built] to accomplish this. But here the problem is that DEV_JOB is on a different server than TEST_JOB, so TEST_JOB fails to recognize DEV_JOB.
Now, how would I achieve this scenario?

You can use the Jenkins remote API to trigger a job.
Say you have DEV_JOB on JENKINS_1. Add a penultimate step (or an upstream/downstream project containing only this step) which invokes TEST_JOB via a remote API call to the JENKINS_2 server.
An example command would be:
curl --user "username:password" "http://JENKINS_2/job/TEST_JOB/buildWithParameters?SOMEPARAMETER=$SOMEPARAMETER"
username:password must be a valid user on JENKINS_2.
Avoid using your own account here; rather, use a 'build trigger' account that only has permission to start those jobs.
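As a concrete sketch of that trigger step (the host, job, account name, and token here are placeholders; prefer an API token over a real password):

#!/bin/sh
# Final build step in DEV_JOB on JENKINS_1: remotely trigger TEST_JOB on JENKINS_2.
# --fail makes curl exit non-zero on an HTTP error, so this step (and the build)
# fails visibly if JENKINS_2 rejects the request.
curl --fail --silent --show-error \
     --user "trigger-bot:API_TOKEN" \
     "http://JENKINS_2/job/TEST_JOB/buildWithParameters?SOMEPARAMETER=$SOMEPARAMETER"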

Related

Trigger another Jenkins job from an execute shell step

How do I trigger a Jenkins job from the execute shell section of another freestyle job? For pipeline jobs we have Groovy, which makes it easy.
There are two ways to do this:
1 - Add a post-build step to build another project.
2 - Create a user in Jenkins for triggering jobs remotely (for example, remoteUser) and create an API token for this user. Then edit the project to be triggered, tick the option "Trigger Builds Remotely", and add an authentication token (it's not the user's API token; you can generate a new one for this).
Now you can just use curl to start the job:
curl -X POST "http://remoteUser:{remoteUser_API_TOKEN}@jenkins.domain/job/my_project/build?token=TOKEN"
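If the target job is parameterized, the same approach works against the buildWithParameters endpoint; a sketch, where MYPARAM and the tokens are placeholders:

curl -X POST "http://remoteUser:{remoteUser_API_TOKEN}@jenkins.domain/job/my_project/buildWithParameters?token=TOKEN&MYPARAM=foo"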

JMeter & Jenkins - passing JMeter parameters to a downstream build

The Setup - a Jenkins job using the Jenkins parameters testApp and testEnv. The execute batch step looks like this:
C:\jmeter\apache-jmeter-3.2\bin\jmeter.bat -n -t C:\JMeter\Scripts\API_scripts\%testApp%.jmx -Jtestenv=%testEnv% -JtestApp=%testApp% -JtestBrowser=NA -l C:\AUTO_Results\jtl\%testApp%_%testEnv%.jtl
Post-build Actions
Console output (build log) parsing with a global rule, so that failures logged in the Jenkins console window mark the JMeter script as failed (discussed in "Jenkins shows JMeter script failure even though the script actually passed").
Triggered parameterized build - this is a separate JMeter script that updates a wiki page with either PASS/FAIL and uploads the JMeter report.
The Issue - How do I get the downstream triggered build to use the parameters from the upstream script? I set the parameters to "Current build parameters" but it's not applying them. Also, I won't know the value of the testResult parameter until the upstream build finishes. I tried adding %testResult%=PASS to the 'Predefined parameters' box.
As per Parameterized Trigger Plugin page:
The parameters section can contain a combination of one or more of the following:
a set of predefined properties
properties from a properties file read from the workspace of the triggering build
the parameters of the current build
Subversion revision: makes sure the triggered projects are built with the same revision(s) of the triggering build. You still have to make sure those projects are actually configured to checkout the right Subversion URLs.
Restrict matrix execution to a subset: allows you to specify the same combination filter expression as you use in the matrix project configuration and further restricts the subset of the downstream matrix builds to be run.
So you basically need to copy the parameters you would like to have in the "downstream" job over from the current one.
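One way to handle values that are only known once the upstream run finishes, such as testResult, is to write them to a properties file during the upstream build and point the trigger's "Parameters from properties file" option at it. A minimal sketch (the file name and the hard-coded PASS are illustrative, and this is written as a shell step rather than the Windows batch used above):

# Upstream build step: persist the values the downstream job needs.
# In a real job, testResult would be computed from the test outcome.
cat > downstream.properties <<EOF
testApp=${testApp}
testEnv=${testEnv}
testResult=PASS
EOF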
As a workaround for the current Performance plugin limitations, you can consider running JMeter via the Taurus tool as a wrapper. It has a flexible and powerful pass/fail criteria subsystem which will return a non-zero exit code to Jenkins, triggering a build failure, in case of an issue in the test. If everything goes well, the Taurus exit code will be 0, which Jenkins considers successful. Check out the "How to Run Taurus with the Jenkins Performance Plugin" article for more details.
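For illustration, a minimal Taurus wrapper might look like this (a sketch: the script path and the criteria thresholds are placeholders to adapt to your own pass/fail rules):

# Write a minimal Taurus config around the existing JMeter script and run it.
# bzt exits non-zero when a pass/fail criterion trips, which fails the Jenkins build.
cat > taurus.yml <<'EOF'
execution:
- scenario:
    script: API_scripts/myApp.jmx   # placeholder path to the existing .jmx
reporting:
- module: passfail
  criteria:
  - failures>0% for 10s, stop as failed
EOF
bzt taurus.yml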

How to build with parameters in Jenkins from a GitLab push?

I have GitLab Community Edition 8.15.2 successfully triggering pipeline projects in Jenkins 2.32.1 using a webhook. I want the GitLab push to trigger a build with parameters, but the parameter value is null when it comes through, so the build fails.
The gitlab webhook looks like:
http://jenkins.server:8080/project/project-a/buildWithParameters?MYPARAM=foo
In my pipeline project I echo the parameter value out with
echo "MYPARAM: ${MYPARAM}"
and it's not set to anything. Any ideas on where I've gone wrong?
UPDATE
The actual code I'm using in the pipeline is:
node {
    try {
        echo "VM_HOST: ${VM_HOST}"
        echo "VM_NAME: ${VM_NAME}"
        stage('checkout') {
            deleteDir()
            git 'http://git-server/project/automated-build.git'
        }
        stage('build') {
            bat 'powershell -nologo -file Remove-MyVM.ps1 -VMHostName %VM_HOST% -VMName "%VM_NAME%" -Verbose'
        }
        ...
    }
}
The parameter VM_HOST has a default value but VM_NAME doesn't. In my Console output in Jenkins I can see:
[Pipeline] echo
VM_HOST: HyperVHost
[Pipeline] echo
VM_NAME:
I have been struggling with this for weeks. I had it working once, but I couldn't get it to work again until today. And the solution was mind-blowingly obvious, of course...
For each pipeline job I had automatically ticked the following box:
Build when a change is pushed to GitLab. GitLab CI Service URL:
http://jenkins.dev:8080/project/MyProject
Then from GitLab I used the webhook to trigger the above.
Like you I tried to add /buildWithParameters and tried many other things that didn't work.
The problem was, I ticked the wrong checkbox!
Since I trigger the build from a GitLab webhook, the above checkbox (build when a...) does not have to be checked at all.
What needs to be checked is:
Trigger builds remotely (e.g., from scripts)
That checkbox provides you with a new URL:
Use the following URL to trigger build remotely:
JENKINS_URL/job/MyProject/build?token=TOKEN_NAME or
/buildWithParameters?token=TOKEN_NAME
As all the documentation I came across states, and as you can see, the URL now no longer starts with /project, but with /job instead!
So tick that box and change your URL accordingly:
http://jenkins.server:8080/job/project-a/buildWithParameters?token=TOKEN_NAME&MYPARAM=foo
Lastly, I want to mention the token:
In the GitLab webhook there is a separate field for "token", which states:
Use this token to validate received payloads. It will be sent with the request in the X-Gitlab-Token HTTP header.
So, the token provided there will be sent along with the request as an HTTP header.
This is the token which can be provided globally in the Jenkins setup.
The token you must provide in the Jenkins job when ticking the box "Trigger builds remotely" must be sent in the URL as a GET parameter, just like the example shows.
Final note: personally, I never got this working completely, because I couldn't get the Jenkins CSRF protection off my back, and disabling it gave me another error. However, hopefully the above fixes the problem for you and others.
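For reference, a common workaround for the CSRF protection (a sketch; the user, API token, and URLs are placeholders) is to fetch a crumb from Jenkins' crumbIssuer endpoint and send it along as a header:

# Ask Jenkins for a crumb, then attach it to the trigger request as a header.
CRUMB=$(curl -s --user "USER:API_TOKEN" \
  'http://jenkins.server:8080/crumbIssuer/api/xml?xpath=concat(//crumbRequestField,":",//crumb)')
curl -X POST --user "USER:API_TOKEN" -H "$CRUMB" \
  "http://jenkins.server:8080/job/project-a/buildWithParameters?token=TOKEN_NAME&MYPARAM=foo"

On recent Jenkins versions, authenticating with an API token instead of a password should exempt the request from the crumb check entirely.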
The GitLab plugin does not allow you to pass arbitrary parameters. There is an open issue for this in their project that deserves to be upvoted.
My convoluted solution was to use the values desired for the push trigger as the default parameters of the job. Then I used the Parameterized Scheduler plugin to use other values in the scheduled executions.
The problem is that the job's usability suffers when it is run manually, since the default parameters are tailored to the push hook.
I found the solution here: https://www.jittagornp.me/blog/jenkins-gitlab-webhook/
I verified it with Jenkins 2.263.1 and GitLab Community Edition 13.6.1.
Your webhook URL will look like:
https://hunter:11a403302a4f01b9b4975c0ac27441a5cc@jenkinsservername.com/job/yourjenkinsproject/buildWithParameters?token=Aju9ryHUu6t7W8wLSeCWtY2bWjzQduYNPyY7B3gs&yourparam=yourvalue
"hunter" ist your username in Jenkins.
The following is the Jenkins API Token you have to create in your Jenkins User Managment independent of the project.
The last Token is the one you specify in the jenkins project options under "Trigger builds remotely (e.g., from scripts)"
The last thing is to add your Parameter and value to the url with &param=value

Jenkins - Upstream project dependency issue

Here is something I want to achieve:
I have a Jenkins project which has 4 upstream projects. I don't want to trigger this project when the upstream jobs are done building; instead, I want to trigger it via the remote API and have it wait on the upstream projects until they are done building, if they are building.
Let's say all 4 upstream projects can build the source code from any branch passed via the API, but I want the downstream project to start only when a specific branch is passed to these upstream projects.
Scenario:
Let's say I have two clusters, A and B. For the sake of this question, I want to deploy my code to cluster A, i.e. front-end and back-end code. I have one project to build the front end and one project to build the back end (these two projects can build code for cluster A or B, based on the branch passed). I also have two deploy projects for cluster A, which deploy the front end and the back end. So, when I pass a branch to build code for cluster A, it will trigger the build projects. But I only want the two deploy projects to start when this specific branch was passed.
If you want to control the builds remotely then use the Jenkins CLI - I have found it very useful: http://jenkinshost:8080/cli
You need to get the SSH key config right: add the public key of the user running the CLI to the user you want to run the job as in Jenkins, using the Jenkins user configuration (not on the command line).
Test the key setup with:
java -jar jenkins-cli.jar -s http://jenkinshost:8080 who-am-i
This should then report which user will be used to run the build in Jenkins
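Once who-am-i reports the expected user, the same CLI can trigger the downstream job; a sketch, where the job name and branch parameter are placeholders (-p passes a parameter, and the trailing -s waits for the build to finish and propagates its result):

java -jar jenkins-cli.jar -s http://jenkinshost:8080 build downstream-project -p branch=branchA -s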
But I think you can use the Conditional Build Step plugin for your problem
https://wiki.jenkins-ci.org/display/JENKINS/Conditional+BuildStep+Plugin
This will allow you to put a conditional wrapper around a build step, i.e.:
if branch==branchA then
trigger step - deploy to clusterA
if branch==branchB then
trigger step - deploy to clusterB
Personally I find this plugin a bit clunky and it makes the job config page a little messy
Another solution I came up with was to always call the child job and then let it decide if it runs.
So I have a script step at the start of the child job to see if it should run
if [ "${branch}" = "Not the right branch name" ]; then
    echo "EXIT_GREEN"
    exit 1
fi
You have now failed this job, which would cause the parent job to go red, but by using the Groovy Postbuild plugin (https://wiki.jenkins-ci.org/display/JENKINS/Groovy+Postbuild+Plugin) you can add a post-build step like this:
if (manager.logContains(".*EXIT_GREEN.*")) {
    manager.addBadge("info.gif", "This job had nothing to do")
    manager.build.@result = hudson.model.Result.SUCCESS
}
The child job has run green (with an info icon against the build) but has actually not done anything. Obviously, if the branch is one you want to deploy, the first script step does not run the exit 1 and the job continues as normal.

How do I make a Jenkins job start after multiple simultaneous upstream jobs succeed?

In order to get the fastest feedback possible, we occasionally want Jenkins jobs to run in parallel. Jenkins has the ability to start multiple downstream jobs (or 'fork' the pipeline) when a job finishes. However, Jenkins doesn't seem to have any way of making a downstream job start only if all branches of that fork succeed (or 'join' the fork back together).
Jenkins has a "Build after other projects are built" button, but I interpret that as "start this job when any upstream job finishes" (not "start this job when all upstream jobs succeed").
Here is a visualization of what I'm talking about. Does anyone know if a plugin exists to do what I'm after?
Edit:
When I originally posted this question in 2012, Jason's answer (the Join and Promoted Build plugins) was the best, and the solution I went with.
However, dnozay's answer (The Build Flow plugin) was made popular a year or so after this question, which is a much better answer. For what it's worth, if people ask me this question today, I now recommend that instead.
Pipeline plugin
You can use the Pipeline Plugin (formerly workflow-plugin).
It comes with many examples, and you can follow this tutorial.
e.g.
// build
stage('build') {
    ...
}
// deploy
stage('deploy') {
    ...
}
// run tests in parallel
stage('test') {
    parallel(
        functional: {
            ...
        },
        performance: {
            ...
        }
    )
}
// promote artifacts
stage('promote') {
    ...
}
Build flow plugin
You can also use the Build Flow Plugin. It is simply awesome - but it is deprecated (development frozen).
Setting up the jobs
Create jobs for:
build
deploy
performance tests
functional tests
promotion
Setting up the upstream
in the upstream job (here, build), create a unique artifact, e.g.:
echo ${BUILD_TAG} > build.tag
archive the build.tag artifact.
record fingerprints to track file usage; if any job copies the same build.tag file and records fingerprints, you will be able to track the parent.
configure the job to get promoted when the promotion job is successful.
Setting up the downstream jobs
I use 2 parameters PARENT_JOB_NAME and PARENT_BUILD_NUMBER
Copy the artifacts from upstream build job using the Copy Artifact Plugin
Project name = ${PARENT_JOB_NAME}
Which build = ${PARENT_BUILD_NUMBER}
Artifacts to copy = build.tag
Record fingerprints; that's crucial.
Setting up the downstream promotion job
Do the same as the above, to establish upstream-downstream relationship.
It does not need any build step. You can perform additional post-build actions like "hey QA, it's your turn".
Create a build flow job
// start with the build
parent = build("build")
parent_job_name = parent.environment["JOB_NAME"]
parent_build_number = parent.environment["BUILD_NUMBER"]
// then deploy
build("deploy")
// then your qualifying tests
parallel (
{ build("functional tests",
PARENT_BUILD_NUMBER: parent_build_number,
PARENT_JOB_NAME: parent_job_name) },
{ build("performance tests",
PARENT_BUILD_NUMBER: parent_build_number,
PARENT_JOB_NAME: parent_job_name) }
)
// if nothing failed till now...
build("promotion",
PARENT_BUILD_NUMBER: parent_build_number,
PARENT_JOB_NAME: parent_job_name)
// knock yourself out...
build("more expensive QA tests",
PARENT_BUILD_NUMBER: parent_build_number,
PARENT_JOB_NAME: parent_job_name)
good luck.
There are two solutions that I have used for this scenario in the past:
Use the Join Plugin on your "deploy" job and specify "promote" as the targeted job. You would have to specify "Functional Tests" and "Performance Tests" as the joined jobs and start them in some fashion, post-build. The Parameterized Trigger Plugin is good for this.
Use the Promoted Builds Plugin on your "deploy" job, specify a promotion that fires when the downstream jobs are completed, and name the Functional and Performance test jobs. As part of the promotion action, trigger the "promote" job. You still have to start the two test jobs from "deploy".
There is a CRITICAL aspect to both of these solutions: fingerprints must be correctly used. Here is what I found:
The "build" job must ORIGINATE a new fingerprinted file. In other words, it has to fingerprint some file that Jenkins thinks was originated by the initial job. Double check the "See Fingerprints" link of the job to verify this.
All downstream linked jobs (in this case, "deploy", "Functional Tests" and "Performance tests") need to obtain and fingerprint this same file. The Copy Artifacts plugin is great for this sort of thing.
Keep in mind that some plugins allow you to change the order of fingerprinting and downstream job starting; in this case, the fingerprinting MUST occur before a downstream job fingerprints the same file, to ensure the ORIGIN of the fingerprint is properly set.
The Multijob plugin works beautifully for that scenario. It also comes in handy if you want a single "parent" job to kick off multiple "child" jobs but still be able to execute each of the children manually, by themselves. This works by creating "phases", to which you add 1 to n jobs. The build only continues when the entire phase is done, so if a phase has multiple jobs, they all must complete before the rest are executed. Naturally, it is configurable whether the build continues if there is a failure within the phase.
Jenkins recently announced first class support for workflow.
I believe the Workflow Plugin is now called the Pipeline Plugin and is the (current) preferred solution to the original question, inspired by the Build Flow Plugin. There is also a Getting Started Tutorial in GitHub.
The answers by jason & dnozay are good enough. But in case someone is looking for an easy way, just use the JobFanIn plugin.
This diamond dependency build pipeline could be configured with the DepBuilder plugin. DepBuilder uses its own domain-specific language, which in this case would look like:
_BUILD {
// define the maximum duration of the build (4 hours)
maxDuration: 04:00
}
// define the build order of the existing Jenkins jobs
Build -> Deploy
Deploy -> "Functional Tests" -> Promote
Deploy -> "Performance Tests" -> Promote
After building the project, the build visualization will be shown on the project dashboard page:
If any of the upstream jobs doesn't succeed, the build will be automatically aborted. The abort behavior can be tweaked on a per-job basis; for more info see the DepBuilder documentation.
