I have the snippet below for finding the matching job names and then triggering all of them to run in parallel.
Shared library file CommonPipelineMethods.groovy
import jenkins.instance.*
import jenkins.model.*
import hudson.model.Result
import hudson.model.*
import org.jenkinsci.plugins.workflow.support.steps.*

class PipelineMethods {
    def buildSingleJob(downstreamJob) {
        return {
            def result = build job: downstreamJob.fullName, propagate: false
            echo "${downstreamJob.fullName} finished: ${result.rawBuild.result}"
        }
    }
}

return new PipelineMethods()
The main Jenkinsfile script:
def commonPipelineMethods

pipeline {
    stages {
        stage('Load Common Methods into Pipeline') {
            steps {
                script {
                    def JenkinsFilePath = '/config/jenkins/jobs'
                    commonPipelineMethods = load "${WORKSPACE}${JenkinsFilePath}/CommonPipelineMethods.groovy"
                }
            }
        }
        stage('Integration Test Run') {
            steps {
                script {
                    matchingJobs = commonPipelineMethods.getIntegrationTestJobs(venture_to_test, testAgainst)
                    parallel matchingJobs.collectEntries { downstreamJob ->
                        [downstreamJob.name, commonPipelineMethods.buildSingleJob(downstreamJob)]
                    }
                }
            }
        }
    }
}
The script works fine, but the Map-building and the parallel step make the pipeline a bit busy and not easy to follow. The main purpose here is to reduce the pipeline script so it is cleaner and easier for others to maintain, with calls as simple as matchingJobs = commonMethods.getIntegrationTestJobs(venture, environment), so others can understand it right away and know what the code does in this context.
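(getIntegrationTestJobs itself isn't shown in the question; purely as a hypothetical sketch, and assuming the jenkins.model and hudson.model imports already present in the library file, it might look something like this. The name-matching logic is an assumption, not taken from the question.)

// Hypothetical sketch only: find all jobs whose full name mentions both the
// venture and the environment under test.
def getIntegrationTestJobs(venture, testAgainst) {
    return Jenkins.instance.getAllItems(Job.class).findAll { job ->
        job.fullName.contains(venture) && job.fullName.contains(testAgainst)
    }
}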
I tried several ways to improve it, for example by moving the single-job build logic out of the pipeline itself and into the external library:
def buildSingleJobParallel(jobFullName) {
    String tempPipelineResult = 'SUCCESS'
    def result = build job: jobFullName, propagate: false
    echo "${jobFullName} finished: ${result.rawBuild.result.toString()}"
    if (result.rawBuild.result.isWorseThan(Result.SUCCESS)) {
        tempPipelineResult = 'FAILURE'
    }
    return tempPipelineResult
}
Jenkins then prompted me with:
groovy.lang.MissingMethodException: No signature of method: PipelineMethods.build() is applicable for argument types: (java.util.LinkedHashMap) values: [[job:test_1, propagate:false]]
I understand that the build() method comes from the Pipeline Build Step plugin, but I failed to import it and use it inside that commonMethods library (a local library that I load with the load() method in the very first stage of my pipeline script).
So my questions are:
Could I use the Pipeline Build Step plugin's build() inside the external library mentioned above?
If that's not possible, is there a cleaner way to make my script simpler and easier to read?
Thanks, everybody!
I'm not sure if it is runnable or looks clearer, but I tried to put everything together from the question and comments:
// function that returns a closure to be used as one of the parallel jobs;
// `steps` is the pipeline script itself (`this`), which provides the build/echo steps
def buildSingleJobParallel(steps, mjob) {
    return {
        def result = steps.build job: mjob.fullName, propagate: false
        steps.echo "${mjob.fullName} finished: ${result.rawBuild.result}"
        if (result.rawBuild.result.isWorseThan(Result.SUCCESS)) {
            steps.currentBuild.result = 'FAILURE'
        }
    }
}

stage('Integration Test Run') {
    steps {
        script {
            // build a Map<jobName, Closure> and run the jobs in parallel
            parallel matchingJobs.collectEntries { mjob ->
                [mjob.name, buildSingleJobParallel(this, mjob)]
            }
        }
    }
}
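The same trick answers the original question: the build() step can't be imported into a plain Groovy class, but you can pass the pipeline script into the loaded library and invoke the steps through it. A minimal sketch of CommonPipelineMethods.groovy reworked this way (the extra steps parameter is the only change assumed here):

import hudson.model.Result

class PipelineMethods {
    def buildSingleJob(steps, downstreamJob) {
        return {
            // all pipeline steps are invoked via the script object passed in as `this`
            def result = steps.build job: downstreamJob.fullName, propagate: false
            steps.echo "${downstreamJob.fullName} finished: ${result.rawBuild.result}"
            if (result.rawBuild.result.isWorseThan(Result.SUCCESS)) {
                steps.currentBuild.result = 'FAILURE'
            }
        }
    }
}

return new PipelineMethods()

The Jenkinsfile call then becomes commonPipelineMethods.buildSingleJob(this, downstreamJob).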
Related
The following code is throwing the error below.
if(!SkipLanguageComponentTests){
^
WorkflowScript: : Groovy compilation error(s) in script. Error(s): "Ambiguous expression could be either a parameterless closure expression or an isolated open code block;
solution: Add an explicit closure parameter list,
script {
2 errors
def SkipLanguageComponentTests = false

pipeline {
    parameters {
        booleanParam(name: 'SkipLanguageComponentTests', defaultValue: false, description: 'XYZ')
    }
    stages {
        stage('Checkout Source') {
            steps {
                checkout scm
            }
        }
        stage("Component & Language Tests") {
            steps {
                parallel(
                    "componentTestsTask": {
                        // component test starts
                        dir("docker") {
                            sh script: "docker-compose -f blah blah"
                        }
                        // some xyz step here
                        // component test ends here
                    },
                    "integrationTestTasks": {
                        // language test script starts
                        if (!SkipLanguageComponentTests) {
                            // run lang test and publish report
                        } else {
                            echo "Skip Language Component Tests"
                        }
                        // language test script ends
                    }
                )
            }
        }
    }
}
I have tried following the documentation: https://www.jenkins.io/blog/2017/09/25/declarative-1/
I have also tried the approach from the answer to: Running stages in parallel with Jenkins workflow / pipeline
stage("Parallel") { steps { parallel ( "firstTask" : { //do some stuff }, "secondTask" : { // Do some other stuff in parallel } ) } }
Can someone help me resolve this?
OK, here is the working version of your pipeline, with a proper IF inside:
pipeline {
    parameters {
        booleanParam(name: 'SkipLanguageComponentTests', defaultValue: false, description: '')
    }
    agent { label 'master' }
    stages {
        stage("Component & Language Tests") {
            parallel {
                stage("componentTestsTask") {
                    steps {
                        // component test starts
                        echo "docker"
                        // some xyz step here
                        // component test ends here
                    }
                }
                stage("integrationTestTasks") {
                    steps {
                        script {
                            // language test script starts
                            if (!params.SkipLanguageComponentTests) {
                                echo "not skipped"
                                // run lang test and publish report
                            } else {
                                echo "Skip Language Component Tests"
                            }
                            // language test script ends
                        }
                    }
                }
            }
        }
    }
}
This pipeline is not optimal; use the information below to improve it.
Notes:
- You are using a declarative pipeline, so I think it is better to stay with the parallel section expressed in the declarative way. Help is here: Jenkins doc about parallel.
- There is a scripted pipeline as well: Jenkins doc about scripted pipeline.
- As I stated in the original answer, you have to use params to refer to the input parameter properly.
- If you are using code, you have to enclose it in a script section; this is like putting a piece of scripted pipeline inside a declarative one.
- The IF statement can also be done declaratively (Jenkins doc about WHEN); see the sketch after these notes.
- I recommend not mixing these 2 styles: if you use both for some good reason, they should be separated from one another as much as possible, for example by using a function or library inside the declarative pipeline. The goal is to keep the pipeline as clear and readable as possible.
You are using an input parameter, so refer to it the way it should be done for inputs in Jenkins:
if (!params.SkipLanguageComponentTests)
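For illustration, a minimal sketch of the when variant (my own example, not from the original answer): the skip logic moves into a declarative when directive, so no script block is needed:

stage("integrationTestTasks") {
    when {
        // run this stage only when the parameter is not set
        expression { !params.SkipLanguageComponentTests }
    }
    steps {
        echo "not skipped"
        // run lang test and publish report
    }
}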
I'm trying to generate Jenkins pipelines using the pipelineJob function in the Job DSL plugin, but cannot pass parameters from the DSL to the pipeline script. I have several projects that use what is essentially the same Jenkinsfile, with differences only in a few steps. I'm trying to use the Job DSL plugin to generate these pipelines on the fly, with the values I want changed in them interpolated from the parameters passed to the DSL.
I've tried just about every combination of string interpolation that I can in the pipeline script, as well as in the DSL, but cannot get Jenkins/Groovy to interpolate variables in the pipeline script.
I'm calling the Job DSL in a pipeline step:
def projectName = "myProject"
def envs = ['DEV', 'QA', 'UAT']
def repositoryURL = 'myrepo.com'

jobDsl targets: ['jobs/*.groovy'].join('\n'),
       additionalParameters: [
           project: projectName,
           environments: envs,
           repository: repositoryURL
       ],
       removedJobAction: 'DELETE',
       removedViewAction: 'DELETE'
The DSL is as follows:
pipelineJob("${project} pipeline") {
displayName('Pipeline')
definition {
cps {
script(readFileFromWorkspace(pipeline.groovy))
}
}
}
pipeline.groovy:
pipeline {
    agent any
    environment {
        REPO = repository
    }
    parameters {
        choice name: "ENVIRONMENT", choices: environments
    }
    stages {
        stage('Deploy') {
            steps {
                echo "Deploying ${env.REPO} to ${params.ENVIRONMENT}..."
            }
        }
    }
}
The variables that I pass in additionalParameters are interpolated in the Job DSL script; a pipeline with the correct name does get generated. The problem is that the variables are not passed to the pipeline script read from the workspace: the Jenkins configuration for the generated pipeline looks exactly the same as the file, without any interpolation of the variables.
I've made a number of attempts at getting the string to interpolate, including a lot of variations of "${environments}", ${environments}, $environments, \$environments... I can't find any that work. I've also tried reading the file as a GStringImpl:
script("${readFileFromWorkspace('pipeline.groovy')}")
Does anyone have any ideas as to how I can make variables propagate down to the pipeline script? I know that I could just use a for loop to do string.replaceAll() on the script text, but that seems cumbersome; there's got to be a better way.
I've come up with a way to make this work. It's not what I'd prefer, which is having the string contents of the file implicitly interpolated during job creation, but it does work; it just adds an extra step.
import groovy.text.SimpleTemplateEngine

def fileContents = readFileFromWorkspace "pipeline.groovy"
def engine = new SimpleTemplateEngine()
def template = engine.createTemplate(fileContents).make(binding.getVariables()).toString()

pipelineJob("${project} pipeline") {
    displayName('Pipeline')
    definition {
        cps {
            script(template)
        }
    }
}
This reads a file from your workspace, then uses it as a template with the binding variables. The other change needed to make this work is escaping any variables used in your Jenkinsfile script, like \${VARIABLE}, so that they are expanded at runtime rather than at the time you build the job. Any variables you want expanded at job creation should be referenced as ${VARIABLE}.
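For example, pipeline.groovy under this scheme might look like the sketch below; the echo lines are my own illustration:

pipeline {
    agent any
    stages {
        stage('Info') {
            steps {
                // ${project} is filled in by SimpleTemplateEngine at job-creation time
                echo "Generated for project: ${project}"
                // \${env.BUILD_NUMBER} survives templating and is expanded by Jenkins at runtime
                echo "Build number at runtime: \${env.BUILD_NUMBER}"
            }
        }
    }
}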
You could achieve what you're trying to do by defining environment variables in the pipelineJob and then using those variables in your pipeline. This is a bit limited because environment variables are strings, but it should work for basic stuff.
Ex.:
// job-dsl
pipelineJob('example') {
    environmentVariables {
        // these vars could be specified by parameters of this job
        env('repository', 'blah')
        env('environments', 'a,b,c') // comma-separated string
    }
    displayName('Pipeline')
    definition {
        cps {
            script(readFileFromWorkspace('pipeline.groovy'))
        }
    }
}
And then in the pipeline:
// pipeline.groovy
pipeline {
    agent any
    environment {
        REPO = env.repository
    }
    parameters {
        // note the need to split the comma-separated string
        choice name: "ENVIRONMENT", choices: env.environments.split(',')
    }
}
You need to use the complete job name as a variable, without quotes. E.g., if JOBNAME is a parameter containing the entire job name:
pipelineJob(JOBNAME) {
    displayName('Pipeline')
    definition {
        cps {
            script(readFileFromWorkspace('pipeline.groovy'))
        }
    }
}
When I load another Groovy file in my Jenkinsfile, it shows me the following error:
"Required context class hudson.FilePath is missing
Perhaps you forgot to surround the code with a step that provides this, such as: node"
I made a Groovy file which contains a function, and I want to call it in my declarative Jenkinsfile, but it shows an error.
My Jenkinsfile:
def myfun = load 'testfun.groovy'

pipeline {
    agent any
    environment {
        REPO_PATH = '/home/manish/Desktop'
        APP_NAME = 'test'
    }
    stages {
        stage('calling function') {
            steps {
                script {
                    myfun('${REPO_PATH}', '${APP_NAME}')
                }
            }
        }
    }
}
Result:
org.jenkinsci.plugins.workflow.steps.MissingContextVariableException: Required context class hudson.FilePath is missing
Perhaps you forgot to surround the code with a step that provides this, such as: node
What is the right way to do this?
You either need to use a scripted pipeline and put the load instruction inside a node section (see this question), or, if you are already using a declarative pipeline (which seems to be the case), you can include it in the environment section:
environment {
    REPO_PATH = '/home/manish/Desktop'
    APP_NAME = 'test'
    MY_FUN = load 'testfun.groovy'
}
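The scripted variant mentioned above, as a minimal sketch (argument values taken from the question):

node {
    // load needs a workspace, which the node block provides
    def myfun = load 'testfun.groovy'
    stage('calling function') {
        myfun('/home/manish/Desktop', 'test')
    }
}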
We have to wrap the load with node {} so that a Jenkins executor executes it on a node. In case we would like to execute on a specific agent node, we can write node('agent name') {}.
Example:
node {
    def myfun = load 'testfun.groovy'
    pipeline {
        agent any
        environment {
            REPO_PATH = '/home/manish/Desktop'
            APP_NAME = 'test'
        }
        stages {
            stage('calling function') {
                steps {
                    script {
                        myfun('${REPO_PATH}', '${APP_NAME}')
                    }
                }
            }
        }
    }
}
Loading the function in an initial script block inside the pipeline worked for me. Something like below:
def myfun

pipeline {
    agent any
    environment {
        REPO_PATH = '/home/manish/Desktop'
        APP_NAME = 'test'
    }
    stages {
        stage('load function') {
            steps {
                script {
                    myfun = load 'testfun.groovy'
                }
            }
        }
        stage('calling function') {
            steps {
                script {
                    myfun("${REPO_PATH}", "${APP_NAME}")
                }
            }
        }
    }
}
I got this error message when I was calling a sh script that does not exist in the repository / file system. Look in the stack trace for a line like the following:
at WorkflowScript.run(WorkflowScript:135)
The 135 marks the line in the Jenkinsfile on which the missing script, or some other error, occurs.
Another possibility is that, due to earlier/underlying errors, the context has been removed, for example by multiple executor machines. This happens if you are missing a node (e.g. script) block, especially in the post { always } block. You can use the if check below in other places as well. Once this check is in place, you will see the other error that was actually causing this message.
post {
    always {
        script {
            // skip the step if the context is missing
            if (getContext(hudson.FilePath)) {
                echo "It works"
            }
        }
    }
}
See https://docs.cloudbees.com/docs/cloudbees-ci-kb/latest/troubleshooting-guides/how-to-troubleshoot-hudson-filepath-is-missing-in-pipeline-run
So I have a use case with Jenkinsfile that I know is not common, and I haven't found a solution for it yet.
Background
We currently have a multi-branch pipeline job configured to build multiple branches. This is used to run system testing of the products across multiple releases. The Jenkins job:
1. Clones all required repositories
2. Deploys the environment
3. Executes the automated test cases
4. Undeploys the environment
In order to avoid having to define the same Jenkinsfile on each branch, we created a shared library. The shared library defines the declarative pipeline stages for the Jenkinsfile. The shared library has the following:
/* File name: vars/myStep.groovy */
def call(Map pipelineParams) {
    callASharedLibraryFunction()
    properties([
        parameters(sharedLibraryGetParameters(pipelineParams))
    ])
    pipeline {
        // snip
        stages {
            stage("clone repos") { }
            stage("Deploy environment") { }
            stage("Executed Tests") { }
            stage("Undeploy environment") { }
        }
        // post directives
    }
}
And the Jenkinsfile simply defines a map and then calls myStep:
e.g.:
/* Sample Jenkinsfile */
pipelineParams = [
    FOO: "foo"
]
myStep pipelineParams
The problem
We now have a need for another Jenkins job where some of the stages will be the same. For example, the new job will need to:
1. Clone all required repositories
2. Deploy the environment
3. Do something else
Changing the behaviour of a common stage (e.g., cloning the repos) should take effect across all the jobs that define this stage. I know we can use the when directive in a stage, but from a usability perspective I want the jobs to be different, as they exercise different things, and the users of one job don't care about the additional stages the other job runs.
I want to avoid code duplication, and better yet, I don't want to duplicate the stage code (including steps, when, post, etc.).
Is there a way a shared library can define the stage "implementation" with all the directives (steps, when, post, etc) once, but have it get called multiple times?
e.g.:
/* File: vars/cloneReposStageFunction.groovy */
def call() {
stage("Clone Repos") { }
}
/* File: vars/myStep.groovy */
def call(Map pipelineParams) {
pipeline {
// snip
stages {
cloneReposStageFunction()
stage("Deploy environment") { }
stage("Executed Tests") { }
stage("Undeploy environment") { }
}
// post directives
}
}
/* File: vars/myNewStep.groovy */
def call(Map pipelineParams) {
pipeline {
// snip
stages {
cloneReposStageFunction()
stage("Deploy environment") { }
stage("Do something else") { }
}
// post directives
}
}
It's an open Jenkins feature request.
I've seen different ways to template a pipeline, but they are far from what you'd like to achieve.
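One workaround, assuming you can switch from declarative to scripted syntax: in a scripted pipeline a stage is just a function call, so a shared-library var can define a stage once, body and all, and each job-level var can reuse it. A rough sketch:

/* File: vars/cloneReposStage.groovy */
def call() {
    stage("Clone Repos") {
        echo "cloning required repositories..." // placeholder body
    }
}

/* File: vars/myStep.groovy (scripted variant) */
def call(Map pipelineParams) {
    node {
        cloneReposStage()
        stage("Deploy environment") { /* ... */ }
        stage("Executed Tests") { /* ... */ }
        stage("Undeploy environment") { /* ... */ }
    }
}

The trade-off is losing declarative directives such as when and per-stage post, so this only fits if those are not needed.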
I have the following Jenkinsfile:
pipeline {
    agent any
    environment { }
    stages {
        stage('stageA') {
            steps {
                // ... Do something with arg1, arg2 or arg3
            }
        }
        stage('stageB') {
            steps {
                // ... Do something with arg1, arg2 or arg3
            }
        }
        // ...
    }
}
Is there anywhere I can specify a universal "pre-stage" or "post-stage" set of actions to perform? A use case would be sending logging information at the end of a stage to a log manager, but it would be preferable not to copy and paste those invocations at the end of each and every stage.
As far as I know, there is no generic post- or pre-stage hook in Jenkins pipelines. You can define post steps in a post section, but you need one per stage.
However, if you don't want to repeat yourself, you have some options.
Use a shared lib
The place to put repeated code is a shared library. That approach lets you declare your own steps in Groovy.
You need another repository to define a shared lib, but apart from that it is a pretty straightforward approach, and you can reuse the code in all of your Jenkins pipelines.
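A minimal sketch of that approach, with a hypothetical custom step named notifyLogManager (the name and body are illustrations, not an existing API):

/* File: vars/notifyLogManager.groovy in the shared library repository */
def call(String stageName) {
    // placeholder: send stage-level logging info to your log manager here
    echo "reporting stage '${stageName}' to the log manager"
}

Each stage's post section can then call notifyLogManager('stageA') in a single line.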
Use a function
If you declare a function outside of the pipeline, you can call it from any stage. This is not really documented and might be prevented in the future; as far as I understand, it messes with the coordination between master and agents. However, it works:
pipeline {
    agent any
    stages {
        stage("First") {
            steps {
                writeFile file: "resultFirst.txt", text: "all good"
            }
            post {
                always {
                    cleanup "first"
                }
            }
        }
        stage("Second") {
            steps {
                writeFile file: "resultSecond.txt", text: "all good as well"
            }
            post {
                always {
                    cleanup "second"
                }
            }
        }
    }
    post {
        always {
            cleanup "global" // this is only triggered after all stages, not after every one
        }
    }
}
void cleanup(String stage) {
    echo "cleanup ${stage}"
    archiveArtifacts artifacts: "result*"
}