I am passing Extended Choice Parameters from one job to another. In the second job I am writing a Groovy script to receive the parameter, and based on that parameter the job must trigger multiple downstream builds in parallel. But I cannot find a method to build jobs from Groovy.
Use the build step from Jenkins Pipeline:
build job: 'jobName',
    parameters: [
        [$class: 'StringParameterValue', name: 'val1', value: '1'],
        [$class: 'LabelParameterValue', name: 'SLAVE_NODE', label: 'slavename']
    ]
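Since the question also asks about running the downstream job multiple times in parallel, here is a minimal sketch combining build with the parallel step. It assumes the Extended Choice parameter arrives as a comma-separated string in params.CHOICES; that parameter name, val1, and jobName are illustrative:
def branches = [:]
params.CHOICES.split(',').each { choice ->
    // one downstream build per selected choice
    branches["build-${choice}"] = {
        build job: 'jobName',
            parameters: [string(name: 'val1', value: choice)]
    }
}
parallel branches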
Jenkins Pipeline is probably what you are searching for. With pipelines, you can define your build using a Groovy DSL.
You will find an introduction in the documentation. An (incomplete) list of steps available through plugins can be found in the steps reference.
P.S. Be warned that there are two different flavors: declarative pipelines (defined using the pipeline keyword) do not offer full freedom, but are a bit easier to handle regarding build failures and parse errors in your pipeline code. Scripted pipelines (with node steps allocating an executor) offer (nearly) the full power of Groovy.
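For orientation, minimal skeletons of the two flavors (each would live in its own Jenkinsfile; the stage names are illustrative):
// declarative: structure enforced by the pipeline keyword
pipeline {
    agent any
    stages {
        stage('Build') {
            steps { echo 'building...' }
        }
    }
}

// scripted: a node step allocates an executor, full Groovy available
node {
    stage('Build') {
        echo 'building...'
    }
}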
Related
I have two jobs: one is a CI job and the other is CD. I want the CI build number to be used in the CD job. Can you please help me with a declarative pipeline script to get the build number as a parameter? Here the CI job is calling the CD job.
Jenkins already provides a simple means to access the number of the current build using env.BUILD_NUMBER. So if you wanted to pass the build number of CI to the downstream job CD, you could do
build([
    job: 'CD',
    parameters: [
        string(name: 'MAIN_BUILD_NUMBER', value: "${env.BUILD_NUMBER}")
    ]
])
Then in the CD job, declare a parameter like this:
parameters {
    string(defaultValue: null, description: 'Build No', name: 'MAIN_BUILD_NUMBER')
}
You should then be able to use ${env.MAIN_BUILD_NUMBER} anywhere in your CD job's Jenkinsfile.
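For completeness, a minimal declarative sketch of the CD side, assuming the parameter declared above (the stage name and echo body are illustrative):
pipeline {
    agent any
    parameters {
        string(defaultValue: '', description: 'Build No', name: 'MAIN_BUILD_NUMBER')
    }
    stages {
        stage('Deploy') {
            steps {
                // the CI build number arrives via the upstream build step
                echo "Deploying artifacts from CI build #${env.MAIN_BUILD_NUMBER}"
            }
        }
    }
}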
I have a parameterized pipeline project with an Active Choices parameter, where the choice list is dynamically populated by a Groovy script. I need to retrieve and use the current job name in the script. The following line works for me in Freestyle projects:
def jobName = this.binding.jenkinsProject.name
However, when I try to use it in a Pipeline project I get:
No such property: jenkinsProject for class: groovy.lang.Binding
In Retrieving Jenkins job name using groovy script in Active Choice Reactive Parameter it's stated that this was resolved in Active Choices plugin v1.4. I'm using version 2.2.1 and the issue still persists. Is this property not available in Pipeline projects? Is there a workaround or an alternative?
If you are trying to get the current job name inside a running job, there is a built-in environment variable for it:
JOB_BASE_NAME
You can see the list of available environment variables in your Jenkins at
http://{hostname}/job/{jobname}/pipeline-syntax/globals
Just replace hostname with your Jenkins address and jobname with some job you have in your Jenkins.
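For example, a one-line usage inside a running pipeline:
// prints the job name without any parent folder path
echo "Current job: ${env.JOB_BASE_NAME}"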
My Jenkins
Jenkins version: 2.176.2
Active Choices Plug-in: 2.2.1
This worked with a pipeline job.
If you are trying to do it in the parameter's script, I'm not sure that's possible.
I faced the same issue and came up with a solution.
You can use the Jenkins pipeline currentBuild variable; for more about its properties, open the page jenkins-server-url/jenkins/pipeline-syntax/globals on your Jenkins server.
String currentJobParentFolderName = currentBuild.fullProjectName.split('/')[0]
String currentJobName = currentBuild.projectName
String paramValue = getParamValue(currentJobName)
properties([
    buildDiscarder(logRotator(daysToKeepStr: '10')),
    disableResume(),
    parameters([
        [$class: 'CascadeChoiceParameter',
         name: 'PARAM',
         choiceType: 'PT_SINGLE_SELECT',
         description: 'param1',
         filterLength: 1,
         filterable: false,
         randomName: 'choice-parameter-99999999999',
         script: [
             $class: 'GroovyScript',
             fallbackScript: [
                 classpath: [],
                 sandbox: false,
                 script: 'return ["Failed to get values"]'
             ],
             script: [
                 classpath: [],
                 sandbox: false,
                 script: """return ['$paramValue']"""
             ]
         ]
        ]
    ])
])
timestamps {
    // job's main logic
}
private static def getParamValue(String jobName) {
    Map paramValMapping = [
        'jobName1': 'value1',
        'jobName2': 'value2',
    ]
    String paramValue = paramValMapping.get(jobName)
    if (!paramValue) {
        throw new Exception("Failed to get value")
    }
    return paramValue
}
currentBuild.fullProjectName - the job name including upper-level folders (I needed exactly this)
currentBuild.projectName - just the job name
Unfortunately, I didn't manage to place all this logic inside the CascadeChoiceParameter script section.
Also, I needed only one value, but this approach can be used for a list of values as well; just don't forget the quotes around string values.
Note that such script changes may require script approval from a Jenkins admin in jenkins/scriptApproval for EACH incoming paramValue.
So for everyone who's still looking for a solution to this problem, there are actually three ways you could go about solving this.
The reason why the jenkinsProject variable is not available in Pipeline jobs is due to an issue with the active choices plugin itself. You can read more about this in the following ticket: https://issues.jenkins.io/browse/JENKINS-29407. There's a separate feature branch in the Active Choices project on GitHub that (kinda) solves this issue: https://github.com/jenkinsci/active-choices-plugin/tree/pipeline-support. It wasn't merged into master (and it looks like it never will be), because adding this to Pipeline jobs breaks this functionality for the Freestyle jobs. What you can do is you can compile the plugin from the source yourself and install it on your Jenkins. Pipeline jobs will have the binding variables available in their active choice parameters, however Freestyle jobs will no longer have them. Only use this if the following two options are for some reason not possible in your case.
You can use the properties{} step to configure job parameters from the pipeline run. During the run you can simply use the JOB_BASE_NAME environment variable as an expression in the GString that you pass as the script text. The only disadvantage is that the parameters only become available after the first build, so a new job has to be triggered once before it becomes parameterized.
You could just use the input() step. Your job won't be parameterized, but users will be able to provide input to the build while it runs. It's not very convenient if the job needs to be triggered by another job or an external webhook, but for jobs that are only ever triggered manually it's probably the best option, as sketched below.
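A minimal sketch of the input() approach from the third option; the message, parameter name, and choice values are all illustrative:
// pauses the build and asks the user to pick a value;
// with a single parameter, input() returns the chosen value directly
def selected = input(
    message: 'Select a value for PARAM',
    parameters: [choice(name: 'PARAM', choices: ['value1', 'value2'], description: 'param1')]
)
echo "Selected: ${selected}"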
I am creating a Jenkins job using Groovy pipeline scripts (I am new at this). I am stuck at a place where I want to trigger another job with some build options set.
Basically, without a Groovy pipeline script, I can do the above (as shown in the picture) using the Parameterized Trigger Plugin. It provides useful variables like ${TRIGGERED_BUILD_NUMBER_} (as shown in the picture, I am triggering a job named Another-Job), and I can also set options like "Block until the triggered projects finish their builds" and the options below it (as shown in the picture).
Actually, I don't know how to do this using a pipeline script. Can someone help me with this or point me to the appropriate documentation?
Thanks in advance!
You can use the build step, which does exactly that:
build job: 'Another-Job', parameters: [
    [$class: 'StringParameterValue', name: 'operation', value: "${OPERATION}"],
    [$class: 'StringParameterValue', name: 'beanstalk_application_version', value: "${env.TRIGGERED_BUILD_NUMBER_Another_Job}-${env.GIT_COMMIT}"]]
Two things worth noting:
The "block until the triggered project is finished" behaviour is the default for this build step, and the step also propagates any downstream error by default. You can use the propagate and wait params if you want to deactivate this default behaviour.
Environment variables and Groovy-defined variables are all available with the same notation as they would have been in your freestyle triggering job. Just make sure you use double quotes and not single quotes around your variables, otherwise they won't be interpolated when triggering downstream jobs.
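For example, a small sketch of overriding those defaults (the job name is illustrative):
// wait for the downstream build but don't fail this build if it fails
def downstream = build job: 'Another-Job', propagate: false, wait: true
echo "Another-Job finished as ${downstream.result} (build #${downstream.number})"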
To build a job with default settings simply write:
build 'Another-Job'
To build a job with parameters:
build job: 'Another-Job', parameters: [string(name: 'some-param-name', value: 'some-param-default-value')]
In general, to write pipeline code I suggest you work closely with the pipeline-syntax documentation provided by any running Jenkins at:
http://my-jenkins-url/job/my-job-name/pipeline-syntax/
I have a build job with an implemented promotion cycle, i.e. Dev -> QA -> Performance -> Production.
What is the correct way to migrate this cycle into a pipeline? It looks rather clean/structured to call each of the above-mentioned jobs. Yet how can I query the build ID (to be able to call the deployment job)? Or have I totally misunderstood the pipeline concept?
You might consider multiple solutions:
Trigger each job sequentially
Just call each job sequentially using the build step:
node() {
    stage "Dev"
    build job: 'Dev'
    stage "QA"
    build job: 'QA'
    // Your other promotion cycles...
}
It is easy to use and will probably already match your current setup, but I'm not a big fan of this solution because the actual output of your pipeline stages (Dev, QA, etc.) will live in the dedicated jobs (the Dev job, the QA job) instead of directly inside your pipeline. Your pipeline becomes an empty shell that just calls other jobs...
Call pipeline functions instead of jobs
Define a pipeline function for each of your promotion cycles (preferably in an external file) and then call each function sequentially. Example:
node {
    git 'http://urlToYourGit/projectContainingYourFunctions'
    cycles = load 'promotions-cycles.groovy'
    stage "Dev"
    cycles.dev()
    stage "QA"
    cycles.qa()
    // Your other promotion cycles calls...
}
The biggest advantage is that your promotion cycle code is committed to your Git repository, and all your stages' output is actually part of your pipeline output, which is great for easy debugging.
Plus, you can easily apply conditions based on the success/failure of your functions (e.g. if your QA stage fails you don't want to go any further).
Note that both solutions should allow you to launch your promotion cycles in parallel if needed and to pass parameters to either your jobs or functions.
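For instance, a minimal sketch of running two cycles in parallel; the BUILD_ID parameter name is an assumption, not something the downstream jobs necessarily declare:
// run the QA and Performance jobs at the same time
parallel(
    QA: { build job: 'QA', parameters: [string(name: 'BUILD_ID', value: "${env.BUILD_NUMBER}")] },
    Performance: { build job: 'Performance' }
)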
It is better to call each build in separate pipeline stages. Something like this:
stage "Dev"
node{
build job: 'Dev', parameters:
[
[$class: 'StringParameterValue', name: 'param', value: "param"],
];
}
stage "QA"
node{
build job: 'QA'
}
etc...
To loop this process you can use the retry step or an endless loop in Groovy.
I'm new to Jenkins and I've been given the simple task of passing the output from one pipeline to another.
Let's say the first pipeline has a script that says echo HelloWorld; how would I pass this output to another pipeline so it displays the same thing?
I've looked at parameterized triggers and a couple of other answers, but I was hoping someone could lay out the step-by-step procedure for me.
If you want to implement it purely with Jenkins pipeline code, what I do is have an orchestrator pipeline job that builds all the pipeline jobs in my process, waits for them to complete, then gets the build number:
Orchestrator job
def result = build job: 'jobA'
def buildNumber = result.getNumber()
echo "jobA build number : ${buildNumber}"
In each job, say 'jobA', I arrange to write the output to a known file (a properties file, for example) which is then archived:
jobA
writeFile encoding: 'utf-8', file: 'results.properties', text: 'a=123\r\nb=foo'
archiveArtifacts 'results.properties'
Then, after building a job like jobA, use the build number with the Copy Artifact plugin to get the file back into your orchestrator job and process it however you want:
Orchestrator job
step([$class     : 'CopyArtifact',
      filter     : 'results.properties',
      flatten    : true,
      projectName: 'jobA',
      selector   : [$class     : 'SpecificBuildSelector',
                    buildNumber: buildNumber.toString()]])
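For the processing step, a minimal sketch using the readProperties step from the Pipeline Utility Steps plugin (the file name matches the example above):
// parse the copied file into a map and use the values
def props = readProperties file: 'results.properties'
echo "jobA produced a=${props.a} and b=${props.b}"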
You will find these plugins useful to look at:
Copy Artifact Plugin
Pipeline Utility Steps Plugin
If you are chaining jobs instead of using an orchestrator (say jobA builds jobB, which builds jobC, and so on), then you can use a similar method: CopyArtifact can copy from the upstream job, or you can pass parameters with the build number and name of the upstream job. I chose an orchestrator job after moving away from chained jobs because I need some jobs to be built in parallel.
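For the chained variant, a hypothetical sketch of jobA passing its own coordinates downstream so jobB can copy the artifact back; the UPSTREAM_JOB and UPSTREAM_BUILD parameter names are illustrative and jobB would need to declare them:
// in jobA, after archiveArtifacts:
build job: 'jobB', parameters: [
    string(name: 'UPSTREAM_JOB', value: "${env.JOB_NAME}"),
    string(name: 'UPSTREAM_BUILD', value: "${env.BUILD_NUMBER}")
]

// in jobB, copy the file from the triggering build:
step([$class     : 'CopyArtifact',
      filter     : 'results.properties',
      projectName: params.UPSTREAM_JOB,
      selector   : [$class: 'SpecificBuildSelector', buildNumber: params.UPSTREAM_BUILD]])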