How to integrate Jobs in Jenkins using Pipeline

I have four jobs: A, B, C, D.
A: Build
B: Test
C: Sonar Analysis
D: Deploy
My scenario:
1. I need to create a pipeline A->B->C
2. I need to create another pipeline A->B->D
My issue is: if I select "Trigger parameterized build on other projects" and add Job B under Job A, I can't use Job A for my second scenario.
How can I use Job A in both pipelines without one affecting the other?

If I understand the question correctly, you are looking to create two pipelines. In a pipeline, you define which jobs to build using stages.
For your requirement, you need to create two pipeline jobs and define their stages according to your needs; "Trigger parameterized build on other projects" is not a suitable option for you. The first pipeline (A->B->C) would contain stages like these:
stage('Build A') {
    build job: 'A', parameters: <Give_your_own_parameters>
}
stage('Build B') {
    build job: 'B', parameters: <Give_your_own_parameters>
}
stage('Build C') {
    build job: 'C', parameters: <Give_your_own_parameters>
}
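For the second pipeline (A->B->D) you would do the same, swapping the last stage. A minimal runnable sketch in scripted syntax; the BRANCH parameter is purely illustrative, use whatever parameters your jobs actually define:

node {
    stage('Build A') {
        // hypothetical string parameter, for illustration only
        build job: 'A', parameters: [string(name: 'BRANCH', value: 'main')]
    }
    stage('Build B') {
        build job: 'B'
    }
    stage('Build D') {
        build job: 'D'
    }
}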
You can also get the syntax from the Pipeline Syntax link in the Pipeline section of the job you are building.

You can also ease the process by creating duplicates of projects A and B. A project can easily be duplicated via New Item > Copy from Project A.

This is not easily feasible, for a simple reason: you are not using Jenkins jobs the way they were meant to be used.
The concept of a job in Jenkins is that a job is a sequence of actions. A job does not have a single responsibility such as just building, just testing, or just deploying. In your case, you should have three "actions" in one job and three "actions" in another job.
The freestyle job approach
The common approach would be something like:

Build job
Action 1: build
Action 2: test
Action 3: run Sonar analysis

Deploy job
Action 1: build
Action 2: test
Action 3: deploy
Unless I'm missing something here, you don't want to split these three or four actions into four separate jobs, as this would be highly inefficient. For example, the test phase and the Sonar analysis should probably run just after the code has been built, so you want to share the same workspace in order to test the code you just built.
The pipeline approach
Another - preferred - approach would be to use actual Jenkins pipelines, i.e. Groovy scripts that allow you to define your steps as functions and then reuse them in both your "Build job" and your "Deploy job".
As an example, you could have a functions.groovy containing your build/test functions:
functions.groovy
def build() {
    // Build code here...
}

def test() {
    // Test code here...
}
build-job.groovy
node {
    // functions.groovy must be present in the workspace (e.g. after a checkout)
    def functions = load 'functions.groovy'
    stage('Build') {
        functions.build()
    }
    stage('Test') {
        functions.test()
    }
    stage('Sonar Analysis') {
        // Sonar analysis code...
    }
}
deploy-job.groovy
node {
    def functions = load 'functions.groovy'
    stage('Build') {
        functions.build()
    }
    stage('Test') {
        functions.test()
    }
    stage('Deploy') {
        // Deploy code...
    }
}

Related

Executing multiple jenkins jobs

Can someone please let me know how I can trigger multiple Jenkins jobs? We have 5-6 jobs that we need to trigger manually after some technical upgrades. We currently have to navigate to each of these jobs manually and click on 'Build'. Is there any way I can create a new job or shell script that will trigger all of these jobs with a single click or run?
The simplest approach is to create a new pipeline which triggers the other pipelines.
You can also make the stages parallel for faster execution if ordering is not an issue.
You can find a detailed answer and approach here.
Yes, you can do it both ways you mentioned. Creating a new job that runs all the other jobs as downstream jobs is the fastest and easiest option.
To do this, create a new job, and if you are using declarative pipeline syntax, specify the jobs to be triggered:
stage('trigger-multiple-jobs') {
    parallel {
        stage('first job') {
            steps {
                build([
                    job: 'JobName',
                    wait: false,
                    parameters: [
                        string(name: 'PARAM_1', value: "${PARAM_1}")
                    ]
                ])
            }
        }
        stage('second job') {
            steps {
                build([
                    job: 'JobName2',
                    wait: false
                ])
            }
        }
    }
}
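The stage above has to sit inside a complete declarative pipeline. A minimal sketch of the surrounding skeleton, shown with a single non-parallel trigger for brevity (JobName is a placeholder):

pipeline {
    agent any
    stages {
        stage('trigger-multiple-jobs') {
            steps {
                // triggers the downstream job without waiting for its result
                build(job: 'JobName', wait: false)
            }
        }
    }
}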
You can alternatively create a freestyle job, navigate down to the section for triggering downstream jobs, and select the jobs you want to be triggered from the drop-down.
If you want to use a shell script, you can trigger the jobs using API calls. This is described well in this answer.

Jenkins Parameterized Project For Multiple Parameters

I have a Jenkins pipeline which is triggered for different projects. The only difference between the pipelines is the project name.
So I have added a parameter ${project} in the Jenkins job parameters and assigned it the name of the project.
We have a number of projects, and I am trying to find a better way to achieve this.
I am wondering how I can run the pipeline with a different parameter value for each project without actually creating different projects in Jenkins.
I am pasting some screenshots for you to understand what exactly I want to achieve.
As mentioned here, this is a radioserver project, having a pipeline which has ${project} in it.
How can I give multiple values to that ${project} from a single Jenkins job?
If you have any doubts, please message me or add a comment.
You can see the two projects I have created; their contents are identical and only the parameter value differs. I am wondering how I can give a different value to that parameter.
As you can see in the two images, their default values are radioserver and nrcuup. How can I combine them and make them run seamlessly?
I hope this helps; let me know if any changes to the answer are required.
You can use conditions in Jenkins: based on the value of ${PROJECT}, you can execute a particular stage.
Here is a simple example of a pipeline where I have given choices for the value of the parameter PROJECT, i.e. test1, test2 and test3.
So, whenever you select test1, the Jenkins job will execute the stages that are conditioned on test1.
Sample pipeline code
pipeline {
    agent any
    parameters {
        choice(
            choices: ['test1', 'test2', 'test3'],
            description: 'PROJECT NAME',
            name: 'PROJECT')
    }
    stages {
        stage('PROJECT 1 RUN') {
            when {
                expression { params.PROJECT == 'test1' }
            }
            steps {
                echo "Hello, test1"
            }
        }
        stage('PROJECT 2 RUN') {
            when {
                expression { params.PROJECT == 'test2' }
            }
            steps {
                echo "Hello, test2"
            }
        }
    }
}
Output: (screenshots) one run with test1 selected and one with test2 selected; only the matching stage executes.
Updated Answer
Yes, it is possible to trigger the job periodically with a specific parameter value using the Parameterized Scheduler plugin.
After you save the project with some parameters (like the pipeline code above), go back to Configure; under Build Triggers you will now see the option Build periodically with parameters.
Example:
Here I run the job with PROJECT=test1 every even minute and PROJECT=test2 every odd minute. Below is the configuration:
*/2 * * * * %PROJECT=test1
1-59/2 * * * * %PROJECT=test2
Please change the cron values according to your needs.

How can I parameterize Jenkinsfile jobs

I have Jenkins pipeline jobs where the only difference between the jobs is a parameter, a single "name" value. I could even use the multibranch job name (though not what it is passed as: JOB_NAME is the branch name, and sadly none of the environment variables look suitable without parsing). It would be great if I could set this outside of the Jenkinsfile, since then I could reuse the same Jenkinsfile for all the various jobs.
Add this to your Jenkinsfile:
properties([
    parameters([
        string(name: 'myParam', defaultValue: '')
    ])
])
Then, once the job has run once, you will see the Build with parameters button in the job UI.
There you can input the parameter value you want.
In the pipeline script you can reference it with params.myParam.
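For example, a minimal scripted sketch putting the two together (the echo is just an illustration):

properties([
    parameters([
        string(name: 'myParam', defaultValue: '')
    ])
])

node {
    stage('Use parameter') {
        echo "myParam is ${params.myParam}"
    }
}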
Basically you need to create a Jenkins shared library, for example named myCoolLib, and have a full declarative pipeline in one file under vars; let's say you call the file myFancyPipeline.groovy.
I wanted to write my own examples, but I see the docs are actually quite nice, so I'll copy from there. First, the myFancyPipeline.groovy:
def call(int buildNumber) {
    if (buildNumber % 2 == 0) {
        pipeline {
            agent any
            stages {
                stage('Even Stage') {
                    steps {
                        echo "The build number is even"
                    }
                }
            }
        }
    } else {
        pipeline {
            agent any
            stages {
                stage('Odd Stage') {
                    steps {
                        echo "The build number is odd"
                    }
                }
            }
        }
    }
}
and then a Jenkinsfile that uses it (now just two lines):

@Library('myCoolLib') _
myFancyPipeline(currentBuild.getNumber())
Obviously the parameter here is an int, but it can be any number of parameters of any type.
I use this approach: one of my Groovy scripts has three parameters (two Strings and an int), and 15-20 Jenkinsfiles use that script via the shared library, and it works perfectly. The motivation is of course one of the most basic rules of programming (not an exact quote, but it goes something like this): if you have the same code in two different places, something is not right.
There is also an option This project is parameterized in your pipeline job configuration. Enter a variable name and, if you wish, a default value. In the pipeline, access this variable with env.variable_name.
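A minimal sketch, assuming a string parameter named PROJECT was defined in the job configuration:

node {
    // PROJECT comes from the "This project is parameterized" job setting
    echo "Running for project: ${env.PROJECT}"
}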

Run Parts of a Pipeline as Separate Job

We're considering using the Jenkins Pipeline plugin for a rather complex project consisting of several deliveries that need to be build using different tools (on different machines) before being merged. Still, it seems to be easy enough to do a complete build with a single Jenkinsfile, and I like the automatic discovery of git branches that comes with Pipeline.
However, at this point, we have jobs for each of the deliveries and use a build-flow based "meta" job to orchestrate the individual jobs. The nice thing about this is that it also allows starting just one individual job if only small changes were made, just to see whether this delivery still compiles.
To emulate this, some ideas came to mind:
Use different Jenkinsfiles for the deliveries and load them in the top-level Jenkinsfile; it seems that the Multibranch Pipeline job does not yet allow configuring which Jenkinsfile to use (https://issues.jenkins-ci.org/browse/JENKINS-35415), however, so creating the jobs for the individual deliveries remains an open question.
Provide a configuration option for the "top-level" job and have ifs for all deliveries in the Jenkinsfile to be able to select which should be built. This would mix different build types in one pipeline, though, and, at the very least, mess up the estimation of the build time.
Are those viable options, or is there a better one?
What you could do is write a pipeline script that has "if" guards around the individual stages, like this:
stage "s1"
if (theStage in ["s1","all"]) {
sleep 2
}
stage "s2"
if (theStage in ["s2", "all"]) {
sleep 2
}
stage "s3"
if (theStage in ["s3", "all"]) {
sleep 2
}
Then you can make a "main" job that uses this script and runs all stages at once by setting the parameter "theStage" to "all". This job will collect the statistics when all stages are run at once and give you useful estimated build times.
Furthermore, you can make a "partial run" job that uses this script and that is parameterized with the stage you want to run. The estimation will not be very useful there, though.
Note that I put the stage itself into the main script and put only the execution code into the conditional, as suggested by Martin Ba. This makes sure that the visualization of the job stays reliable.
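For this to work, theStage has to exist as a job parameter. A minimal sketch of how it could be declared at the top of the same scripted pipeline (assuming a reasonably recent Jenkins that accepts a list of choices):

properties([
    parameters([
        choice(name: 'theStage',
               choices: ['all', 's1', 's2', 's3'],
               description: 'Stage to run, or all for a full build')
    ])
])
// make the parameter available under the name used by the guards above
def theStage = params.theStage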
As an expansion of the previous answer, I would propose something like this:
def stageIf(String name, Closure body) {
    if (params.firstStage <= name && params.lastStage >= name) {
        stage(name, body)
    } else {
        stage(name) {
            echo "Stage skipped: $name"
        }
    }
}

node('linux') {
    properties([
        parameters([
            choiceParam(
                name: 'firstStage',
                choices: '1.Build\n' +
                         '2.Docker\n' +
                         '3.Deploy',
                description: 'First stage to start',
                defaultValue: '1.Build',
            ),
            choiceParam(
                name: 'lastStage',
                choices: '3.Deploy\n' +
                         '2.Docker\n' +
                         '1.Build',
                description: 'Last stage to start',
                defaultValue: '3.Deploy',
            ),
        ])
    ])
    stageIf('1.Build') {
        // ...
    }
    stageIf('3.Deploy') {
        // ...
    }
}
Not as perfect as I would wish, but at least it's working.

Aggregating results of downstream parameterised jobs in Jenkins

I have a Jenkins Build job which triggers multiple Test jobs with the test name as a parameter using the Jenkins Parameterized Trigger Plugin. This kicks off a number of test builds on multiple executors which all run correctly.
I now want to aggregate the results using 'Aggregate downstream test results -> Automatically aggregate all downstream tests'. I have enabled this in the Build job and have set up fingerprinting so that these are recognised as downstream jobs. On the Build job's lastBuild page I can see that they are recognised as downstream builds:
Downstream Builds
Test #1-#3
When I click on "Aggregated Test Results", however, it only shows the latest of these (Test #3). This might be good behaviour if the job always ran the same tests, but mine all run different parts of my test suite.
Is there some way I can get this to aggregate all of the relevant downstream Test builds?
Additional:
Aggregated Test Results does work if you replicate the Test job. This is not ideal as I have a large number of test suites.
I'll outline the manual solution (as mentioned in the comments) and can provide more details later if you need them:
Let P be the parent job and D be a downstream job (you can easily extend the approach to multiple downstream jobs).
An instance (build) of P invokes D via the Parameterized Trigger Plugin as a build step (not as a post-build step) and waits for D to finish. Along with other parameters, P passes to D a parameter - let's call it PARENT_ID - based on P's BUILD_ID.
D executes the tests and archives them as artifacts (along with jUnit reports, if applicable).
P then executes an external Python (or internal Groovy) script that finds the appropriate build of D via PARENT_ID (iterate over the builds of D and examine the value of the PARENT_ID parameter). The script then copies the artifacts from D to P, and P publishes them.
If using Python (that's what I do), utilize the Python JenkinsAPI wrapper. If using Groovy, utilize the Groovy Plugin and run your script as a system script; you can then access Jenkins via its Java API.
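If you go the Groovy route, a rough system-script sketch (run via the Groovy plugin's "Execute system Groovy script" build step); it assumes the downstream job is literally named D and received a PARENT_ID string parameter:

import hudson.model.ParametersAction
import jenkins.model.Jenkins

// 'build' and 'listener' are bound variables in a system Groovy script
def parentId = build.getEnvironment(listener)['BUILD_ID']
def downstream = Jenkins.instance.getItemByFullName('D')

// find the build of D that was triggered with our PARENT_ID
def match = downstream.builds.find { b ->
    b.getAction(ParametersAction)?.getParameter('PARENT_ID')?.value == parentId
}
println(match ? "Matching build of D: #${match.number}" : "No matching build of D found")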
I came up with the following solution using declarative pipelines.
It requires the Copy Artifact plugin to be installed.
In the downstream job, set an environment variable with the path (or pattern) of the result file:
post {
    always {
        script {
            // Note: must be set BEFORE any step below that may fail
            env.RESULT_FILE = 'Devices\\resultsA.xml'
        }
        xunit([GoogleTest(
            pattern: env.RESULT_FILE,
        )])
        // the result file must also be archived so the parent job can copy it
    }
}
Note that I use xunit here, but the same applies with junit.
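With junit, the equivalent report step would be along these lines:

junit testResults: env.RESULT_FILE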
In the parent job, keep a handle on each downstream build; then, in the post section, aggregate the results with the following code:
def runs = []

pipeline {
    agent any
    stages {
        stage('Tests') {
            parallel {
                stage('test A') {
                    steps {
                        script {
                            runs << build(job: "test A", propagate: false)
                        }
                    }
                }
                stage('test B') {
                    steps {
                        script {
                            runs << build(job: "test B", propagate: false)
                        }
                    }
                }
            }
        }
    }
    post {
        always {
            script {
                currentBuild.result = 'SUCCESS'
                def result_files = []
                runs.each {
                    if (it.result != 'SUCCESS') {
                        currentBuild.result = it.result
                    }
                    // copy the result file archived by the downstream build
                    copyArtifacts(
                        filter: it.buildVariables.RESULT_FILE,
                        fingerprintArtifacts: true,
                        projectName: it.getProjectName(),
                        selector: specific(it.getNumber().toString())
                    )
                    result_files << it.buildVariables.RESULT_FILE
                }
                env.RESULT_FILE = result_files.join(',')
                println('Results aggregated from ' + env.RESULT_FILE)
            }
            archiveArtifacts env.RESULT_FILE
            xunit([GoogleTest(
                pattern: env.RESULT_FILE,
            )])
        }
    }
}
Note that the parent job also sets the env variable, so that it can itself be aggregated by its own parent job.
