Groovy current scope error in Jenkins pipeline

I need your help, please.
I'm working on a Groovy script to list all SCM polling jobs.
The script works fine in the Jenkins Script Console, but when I integrate it into a Jenkinsfile and run it in a pipeline, I get this error:
12:51:21 WorkflowScript: 10: The current scope already contains a variable of the name it
12:51:21 @ line 10, column 25.
12:51:21 def logSpec = { it, getTrigger -> String spec = getTrigger(it)?.getSpec(); if (spec ) println ("job_name " + it.name + " job_path " + it.getFullName() + " with spec " + spec )}
12:51:21 ^
12:51:21
12:51:21 1 error
12:51:21
12:51:21 at org.codehaus.groovy.control.ErrorCollector.failIfErrors(ErrorCollector.java:310)
12:51:21 at org.codehaus.groovy.control.CompilationUnit.applyToSourceUnits(CompilationUnit.java:958)
Here is the Jenkinsfile:
#!/usr/bin/env groovy
import hudson.triggers.*
import hudson.maven.MavenModuleSet
import org.jenkinsci.plugins.workflow.job.*

pipeline {
    agent any
    stages {
        stage('list jobs with scm polling') {
            steps {
                def logSpec = { it, getTrigger -> String spec = getTrigger(it)?.getSpec(); if (spec) println("job_name " + it.name + " job_path " + it.getFullName() + " with spec " + spec) }
                println("--- SCM Frequent Polling for Pipeline jobs ---")
                Jenkins.getInstance().getAllItems(WorkflowJob.class).each() { logSpec(it, { it.getSCMTrigger() }) }
                println("\n--- SCM Frequent Polling for FreeStyle jobs ---")
                Jenkins.getInstance().getAllItems(FreeStyleProject.class).each() { logSpec(it, { it.getSCMTrigger() }) }
                println("\n--- SCM Frequent Polling for Maven jobs ---")
                Jenkins.getInstance().getAllItems(MavenModuleSet.class).each() { logSpec(it, { it.getTrigger(SCMTrigger.class) }) }
                println("--- SCM Frequent Polling for Abstract jobs ---")
                Jenkins.getInstance().getAllItems(AbstractProject.class).each() { logSpec(it, { it.getTrigger(SCMTrigger.class) }) }
                println '\nDone.'
            }
        }
    }
}
Can anyone help?
Thanks!

"it" is the implicit variable that Groovy provides in closures that don't declare an explicit parameter. So when you do declare a parameter, make sure it is not named "it", to avoid conflicts with parent scopes that already define it (in your case, the closure passed to .each()).
Also, to integrate a script section into a declarative pipeline, either use the script step or define a function that you can call like a built-in step.
Lastly, .each() doesn't work well in pipeline code, due to the restrictions imposed by the CPS transformation that Jenkins applies to pipeline code (unless the method is tagged @NonCPS, which has restrictions of its own). So .each() should be replaced by a for loop.
pipeline {
    agent any
    stages {
        stage('list jobs with scm polling') {
            steps {
                script {
                    def logSpec = { job, getTrigger ->
                        String spec = getTrigger(job)?.getSpec()
                        if (spec) println("job_name " + job.name + " job_path " + job.getFullName() + " with spec " + spec)
                    }
                    println("--- SCM Frequent Polling for Pipeline jobs ---")
                    for (item in Jenkins.getInstance().getAllItems(WorkflowJob.class)) {
                        logSpec(item, { item.getSCMTrigger() })
                    }
                    // ... other code ...
                    println '\nDone.'
                }
            }
        }
    }
}
Variant with separate function:
pipeline {
    agent any
    stages {
        stage('list jobs with scm polling') {
            steps {
                doStuff()
            }
        }
    }
}

void doStuff() {
    def logSpec = { job, getTrigger ->
        String spec = getTrigger(job)?.getSpec()
        if (spec) println("job_name " + job.name + " job_path " + job.getFullName() + " with spec " + spec)
    }
    println("--- SCM Frequent Polling for Pipeline jobs ---")
    for (item in Jenkins.getInstance().getAllItems(WorkflowJob.class)) {
        logSpec(item, { item.getSCMTrigger() })
    }
    // ... other code ...
    println '\nDone.'
}
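If you prefer to keep .each(), the @NonCPS route mentioned above also works. This is only a sketch (the function name logAllSpecs is my own): inside a @NonCPS method the body runs as plain, non-CPS Groovy, so .each() is fine, but you must not call pipeline steps from it.
@NonCPS
void logAllSpecs() {
    def logSpec = { job, getTrigger ->
        String spec = getTrigger(job)?.getSpec()
        // note: println output from @NonCPS code may land in the
        // controller log rather than the build console
        if (spec) println("job_name " + job.name + " job_path " + job.getFullName() + " with spec " + spec)
    }
    println("--- SCM Frequent Polling for Pipeline jobs ---")
    Jenkins.getInstance().getAllItems(WorkflowJob.class).each { job ->
        logSpec(job, { j -> j.getSCMTrigger() }) // explicit parameter names, so no clash with "it"
    }
    // ... other job types ...
}
Call it from a steps block the same way as doStuff() above.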

Related

What's the best way to validate that users conformed to the requirements of a Jenkins pipeline?

I'm attempting to validate that all Jenkins pipelines, at least within a single group/organization, have published their JUnit tests. Is there a way to do this programmatically? Also, would it only apply to Jenkinsfiles, or work on all pipelines? Thanks!
I can check this manually by looking for "Test Results" on a job's build page, which indicates that the job has published test results to the JUnit plugin.
If I were to write a Jenkinsfile, it might look something like this (though results can also be attached to the JUnit plugin manually):
pipeline {
    agent any
    stages {
        stage('Compile') {
            steps {
                // Log in to the repository
                configFileProvider([configFile(fileId: 'nexus_maven_configuration', variable: 'MAVEN_SETTINGS')]) {
                    sh 'mvn -s $MAVEN_SETTINGS compile'
                }
            }
        }
        stage('Test') {
            steps {
                configFileProvider([configFile(fileId: 'nexus_maven_configuration', variable: 'MAVEN_SETTINGS')]) {
                    sh 'mvn -s $MAVEN_SETTINGS test'
                }
            }
        }
    }
    post {
        always {
            junit '**/target/surefire-reports/*.xml'
            archive 'target/*.jar'
        }
    }
}
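(Note that newer Jenkins versions deprecate the archive step; archiveArtifacts artifacts: 'target/*.jar' is the current equivalent.)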
Here is a script you can use to check whether tests are attached to jobs in a specific subdirectory. You can run it either through a pipeline or in the Jenkins Script Console.
def subFolderToCheck = "folder1" // only check jobs in a specific subdirectory
Jenkins.instance.getAllItems(Job.class).each { jobitem ->
    def jobName = jobitem.getFullName()
    def jobInfo = Jenkins.instance.getItemByFullName(jobName)
    // Check whether the last successful build has any tests attached.
    if (jobName.contains(subFolderToCheck) && jobInfo.getLastSuccessfulBuild() != null) {
        def results = jobInfo.getLastSuccessfulBuild().getActions(hudson.tasks.junit.TestResultAction.class).result
        println("Job : " + jobName + " Tests " + results.size())
        if (results == null || results.size() <= 0) {
            print("Job " + jobName + " does not have any tests!")
        }
    }
}
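If you would rather run the check from a pipeline than from the Script Console, here is a minimal sketch that combines it with the @NonCPS approach from the answer at the top (the helper name reportJobsWithoutTests and the folder name are placeholders; like the original script, it touches the Jenkins object model, so it needs script approval or administrator rights):
import hudson.model.Job
import hudson.tasks.junit.TestResultAction

// Same logic as the Script Console version above, tagged @NonCPS
// so that .each() is safe to use from pipeline code.
@NonCPS
void reportJobsWithoutTests(String subFolderToCheck) {
    Jenkins.instance.getAllItems(Job.class).each { jobitem ->
        def lastBuild = jobitem.getLastSuccessfulBuild()
        if (jobitem.getFullName().contains(subFolderToCheck) && lastBuild != null) {
            def results = lastBuild.getActions(TestResultAction.class).result
            if (results == null || results.size() <= 0) {
                println("Job " + jobitem.getFullName() + " does not have any tests!")
            }
        }
    }
}

pipeline {
    agent any
    stages {
        stage('Audit test results') {
            steps {
                reportJobsWithoutTests('folder1')
            }
        }
    }
}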

stages in parallel branches of scripted pipeline wait for all previous stages to complete before running

I have a scripted pipeline with parallel branches on different configurations.
def platforms = ['SLES11', 'SLES12']
def build_modes = ['debug', 'release']

def prepareStages(def build_mode, def os) {
    return {
        node(os) {
            stage('Build ' + build_mode + ' ' + os) {
                // do stuff
            }
            stage('Install ' + build_mode + ' ' + os) {
                // do stuff
            }
            stage('Run tests ' + build_mode + ' ' + os) {
                // do stuff
            }
        }
    }
}

stage('Setup environment') {
    node('SLES12') {
        // do stuff
    }
}

stage('Fetch Sources') {
    node('sys_utdb_gh_iil_sles12') {
        // do stuff
    }
}

stage("PIPELINE") {
    def branches = [:]
    platforms.each { o ->
        build_modes.each { m ->
            branches[m + ' ' + o] = prepareStages(m, o)
        }
    }
    parallel branches
}
When I look at the pipeline in Blue Ocean, I see that the "Install ..." stages in branches whose build has already finished do not start until the "Build ..." stages in all branches have finished.
I have seen many different ways to run pipelines with parallel stages; I took this one from an example. Is there another way that would allow the branches to be independent?

How to give a Build Name to your Pipeline job?

I want to give a specific name to every Pipeline job build, e.g. #Build_number parameter1 parameter2.
I have done it in a freestyle project job, but I can't find how to do it in a Pipeline project.
You can use the script block below in any stage of your pipeline:
pipeline {
    agent any
    stages {
        stage("Any stage") {
            steps {
                script {
                    currentBuild.displayName = '#' + currentBuild.number +
                        '-' + params.parameter1 +
                        '-' + params.parameter2
                    currentBuild.description = "The best description."
                }
            }
        }
    }
}
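For example, with build number 42 and (hypothetical) parameter values 'linux' and 'debug', the build would appear in the build history as #42-linux-debug.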

Jenkins pipeline - "cannot invoke method on null object" on function outside the pipeline

I get the error above when trying to run my pipeline.
I tried running it both inside and outside the Groovy sandbox.
I also tried debugging, and it fails on this method call: last_build_number(lastBuildToGenerateNumber).
Before adding the try/catch and the recursion, this code worked well outside the pipeline. Don't get me wrong: this code cannot run inside the pipeline, so I did not try that.
/*
 SEIIc DPS_NIGHTLY_BUILD JenkinsFile
*/
def buildDescription // for setting the build name, based on the downstream jobs' names

def last_build_number(build) {
    println 'the display name is' + build.displayName
    return build.displayName
    if (build != null) {
        if (build.displayName != null) {
            println 'the display name is' + build.displayName
            return build.displayName
        }
    }
    else {
        return '0.0.0.0'
    }
    return '0.0.0.0'
}

def autoIncBuildNightlyNumber(build) {
    def debugEnable = 1
    println 'build is: ' + build.displayName
    def lastBuildToGenerateNumber = build // a build variable
    def last_build_number // last build number, i.e. "52.88.0.7" or "#43"
    build_number = 0
    try {
        println 'last build to genreate from ' + lastBuildToGenerateNumber.displayName
        last_build_number = last_build_number(lastBuildToGenerateNumber)
        if (debugEnable == 1) println 'last successfull build: ' + last_successfull_build
        def tokens = last_build_number.tokenize('.')
        if (debugEnable == 1) println 'tokens: ' + tokens
        // update global variable - if it's not a legit number the crash will be catched
        build_number = tokens[3].toInteger() + 1
        if (debugEnable == 1) println 'new build number: ' + build_number
        return build_number
    } catch (e) {
        if (debugEnable == 1) println 'error is ' + e
        if (debugEnable == 1) println 'build number: ' + build_number + ' is not valid. recurse now to find a valid number'
        build_number = autoIncBuildNightlyNumber(lastBuildToGenerateNumber.getPreviousBuild())
        println 'genrate ' + lastBuildToGenerateNumber
        return build_number
    }
}
// Declarative Pipeline
pipeline {
    options { // maximum time for this job
        timeout(time: 1, unit: 'HOURS')
    }
    environment {
        AUTO_BUILD_NUMBER = autoIncBuildNightlyNumber(currentBuild.getPreviousBuild())
        PLASTICSCM_TARGET_SERVER = "g-plasticscm-server.gilat.com:8087"
        PLASTICSCM_TARGET_REPOSITORY = "SEIIc_DPS"
        PLASTICSCM_WORKSPACE_NAME = "${env.JOB_BASE_NAME}_${env.BUILD_NUMBER}"
        AUTOMATION_FOLDER = "${env.PLASTICSCM_WORKSPACE_NAME}\\Tools\\Automation"
        Branch = "/main"
        TEST_BRANCH = "/QualiTest for SW Automation"
        QUALITEST_FOLDER = "${env.PLASTICSCM_WORKSPACE_NAME}\\QualiTest for SW Automation"
        PLASTICSCM_TEST_REPOSITORY = "SW_Utiles"
        PLASTICSCM_TEST_WORKSPACE = "TEST_${env.JOB_BASE_NAME}_${env.BUILD_NUMBER}"
    }
    // Select target host for building this pipeline
    agent { node { label "SEIIc_DPS" } }
    // Stages to run for this pipeline
    stages {
        /*
         Checkout files from source control. In this case the pipeline uses the PlasticSCM plugin to check out a branch given by the parameter "Branch".
         When this stage runs, it will check out the branch named in the parameter string from the defined repository and server.
         It will not
        */
        stage('SCM Checkout') {
            steps {
                cm branch: env.Branch, changelog: true, repository: env.PLASTICSCM_TARGET_REPOSITORY, server: env.PLASTICSCM_TARGET_SERVER, useUpdate: false, workspaceName: env.PLASTICSCM_WORKSPACE_NAME
                // check out QualiTest
                cm branch: env.TEST_BRANCH, changelog: false, repository: 'SW_Utiles', server: env.PLASTICSCM_TARGET_SERVER, useUpdate: false, workspaceName: env.PLASTICSCM_TEST_WORKSPACE
            }
        }
    } // stages
} // pipeline

Correct way to structure a jenkins groovy pipeline script

I wrote a pipeline that works in Jenkins, but as a newbie to Jenkins scripting there are a lot of things that are unclear to me. Here's the whole script; I'll list my questions below it.
SCRIPT:
node()
{
    def libName = "PROJECT"
    def slnPath = pwd()
    def slnName = "${slnPath}\\${libName}.sln"
    def webProject = "${slnPath}\\PROJECT.Web\\PROJECT.Web.csproj"
    def profile = getProperty("profiles")
    def version = getProperty("Version")
    def deployFolder = "${slnPath}Deploy"
    def buildRevision = ""
    def msbHome = "C:\\Program Files (x86)\\Microsoft Visual Studio\\2017\\Professional\\MSBuild\\15.0\\Bin\\msbuild.exe"
    def msdHome = "C:\\Program Files (x86)\\IIS\\Microsoft Web Deploy V3\\msdeploy.exe"
    def nuget = "F:\\NugetBin\\nuget.exe"
    def assemblyScript = "F:\\Build\\Tools\\AssemblyInfoUpdatePowershellScript\\SetAssemblyVersion.ps1"
    def webserverName = "192.168.0.116"
    def buildName = "PROJECT"
    def filenameBase = "PROJECT"

    stage('SCM update')
    {
        checkout([$class: 'SubversionSCM', additionalCredentials: [], excludedCommitMessages: '', excludedRegions: '', excludedRevprop: '', excludedUsers: '', filterChangelog: false, ignoreDirPropChanges: false, includedRegions: '', locations: [[credentialsId: '08ae9e8c-8db8-43e1-b081-eb352eb14d11', depthOption: 'infinity', ignoreExternalsOption: true, local: '.', remote: 'http://someurl:18080/svn/Prod/Projects/PROJECT/PROJECT/trunk']], workspaceUpdater: [$class: 'UpdateWithRevertUpdater']])
    }
    stage('SCM Revision')
    {
        bat("svn upgrade")
        bat("svn info \"${slnPath}\" >revision.txt")
        for (String i : readFile('revision.txt').split("\r?\n"))
        {
            if (i.contains("Last Changed Rev: "))
            {
                def splitted = i.split(": ")
                echo "Revision : " + splitted[1]
                buildName += "." + splitted[1]
                currentBuild.displayName = buildName
                buildRevision += version + "." + splitted[1]
            }
        }
    }
    stage("AssemblyInfo update")
    {
        powerShell("${assemblyScript} ${buildRevision} -path .")
    }
    stage('Nuget restore')
    {
        bat("${nuget} restore \"${slnName}\"")
    }
    stage('Main build')
    {
        bat("\"${msbHome}\" \"${slnName}\" /p:Configuration=Release /p:PublishProfile=Release /p:DeployOnBuild=true /p:Profile=Release ")
        stash includes: 'Deploy/Web/**', name: 'web_artifact'
        stash includes: 'PROJECT.Web/Web.*', name: 'web_config_files'
        stash includes: 'output/client/release/**', name: 'client_artifact'
        stash includes: 'PROJECT.WPF/App.*', name: 'client_config_files'
        stash includes: 'PROJECT.WPF/Setup//**', name: 'client_setup'
    }
    stage('Profile\'s customizations')
    {
        if (profile != "")
        {
            def buildProfile = profile.split(',')
            def stepsForParallel = buildProfile.collectEntries {
                ["echoing ${it}": performTransformation(it, filenameBase, buildRevision)]
            }
            parallel stepsForParallel
        }
    }
    post
    {
        always
        {
            echo "mimmo"
        }
    }
}

def powerShell(psCmd) {
    bat "powershell.exe -NonInteractive -ExecutionPolicy Bypass -Command \"\$ErrorActionPreference='Stop';[Console]::OutputEncoding=[System.Text.Encoding]::UTF8;$psCmd;EXIT \$global:LastExitCode\""
}

def performTransformation(profile, filename, buildRevision) {
    return {
        node {
            def ctt = "F:\\Build\\Tools\\ConfigTransformationTool\\ctt.exe"
            def nsiTool = "F:\\Build\\Tools\\NSIS\\makensis.exe"
            def slnPath = pwd()
            unstash 'web_artifact'
            unstash 'web_config_files'
            def source = 'Deploy/Web/Web.config'
            def transform = 'PROJECT.Web\\web.' + profile + '.config'
            bat("\"${ctt}\" i s:\"${source}\" t:\"${transform}\" d:\"${source}\"")
            def fname = filename + "_" + profile + "_" + buildRevision + "_web.zip"
            if (fileExists(fname))
                bat("del " + fname)
            zip(zipFile: fname, dir: "Deploy\\Web")
            archiveArtifacts artifacts: fname
            // Now generate the client part
            unstash 'client_artifact'
            unstash 'client_config_files'
            unstash 'client_setup'
            def sourceClient = 'output/client/release/PROJECT.WPF.exe.config'
            def transformClient = 'PROJECT.WPF/App.' + profile + '.config'
            bat("\"${ctt}\" i s:\"${sourceClient}\" t:\"${transformClient}\" d:\"${sourceClient}\"")
            def directory = new File(pwd() + "\\output\\installer\\")
            if (!directory.exists())
            {
                bat("mkdir output\\installer")
            }
            directory = new File(pwd() + "\\output\\installer\\${profile}")
            if (!directory.exists())
            {
                echo " directory does not exist"
                bat("mkdir output\\installer\\${profile}")
            }
            else
            {
                echo " directory exists"
            }
            def filename2 = filename + "_" + profile + "_" + buildRevision + "_client.zip"
            bat("${nsiTool} /DAPP_VERSION=${buildRevision} /DDEST_FOLDER=\"${slnPath}\\output\\installer\\${profile}\" /DTARGET=\"${profile}\" /DSOURCE_FILES=\"${slnPath}\\output\\client\\release\" \"${slnPath}\\PROJECT.WPF\\Setup\\setup.nsi\" ")
            if (fileExists(filename2))
                bat("del " + filename2)
            zip(zipFile: filename2, dir: "output\\installer\\" + profile)
            archiveArtifacts artifacts: filename2
        }
    }
}
My questions are:
1. I've seen some scripts where everything is wrapped in a pipeline {} block. Is this necessary, or does the Jenkins pipeline plugin add it implicitly?
2. I really dislike having all those definitions inside the node and then replicated below.
3. I don't see any parallelism in the Jenkins workflow, even though I have 4 executors sitting idle.
4. I'm not able to hook the post pipeline event to clean the workspace (right now it's just an echo).
1. There are two types of pipeline. Straight Groovy like you have written is referred to as a scripted pipeline. The style with the pipeline {} block around it is a declarative pipeline. Declarative tends to be easier for newer pipeline users and is a good choice for starting out; many pipelines don't need the complexity that scripted allows.
2. This is Groovy: if you want to declare a bunch of variables, you have to do it somewhere, otherwise you end up hard-coding those values throughout your script. Groovy doesn't force you to declare every variable, but you do have to define each one somewhere, and unless you know exactly how a declaration affects scope, you should just declare them. Most programming languages require some kind of variable declaration, especially where scope matters, so I don't see this as a problem; defining all the values in one place at the top is clean and easier to maintain.
3. At first glance your parallel execution looks like it should work, but without setting it up and running it, it is hard to say. It could be that the parallel parts run fast enough that the UI doesn't update. You should be able to see in the console output whether they actually run in parallel.
4. The post block is not available in scripted pipeline; it is part of the declarative syntax. In scripted, to do similar things you have to wrap the work in try/catch/finally and run the post-type steps there.
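A minimal sketch of that pattern, assuming deleteDir() is what you want for clearing the workspace:
node() {
    try {
        stage('Main build') {
            // ... build steps ...
        }
    } catch (e) {
        currentBuild.result = 'FAILURE'
        throw e
    } finally {
        // scripted equivalent of declarative post { always { ... } }:
        // this runs whether the build succeeded or failed
        echo 'cleaning workspace'
        deleteDir() // wipes the current workspace
    }
}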
