Converting a GStringImpl to java.lang.String in a Jenkinsfile - Jenkins

I have a pipeline which takes a persistent string parameter as input. The pipeline then checks whether the parameter value is present in a list.
The problem is that the persisted string is of type GStringImpl, while the list items are of type java.lang.String. When I use the .contains() method, it won't return true even though the value is in the list, which I believe is due to the different types.
I've tried everything I found online, including the toString() method, but I can't get it to work. My code is below.
String ver = ""
pipeline {
agent {
docker{
image 'registry/abc/builder:0.1.5'
args '-t -d -v maven-m2-cache:/home/node/.m2'
}
}
parameters {
persistentString(name: 'Version', defaultValue: '8.4.7.8', description: 'Version to build', successfulOnly: false)
}
stages {
stage('Analyze Parameter'){
steps{
script{
ver = "${Version}".toString()
}
}
}
stage('Build'){
steps{
script{
def version_list1 = ['8.4.7.8','8.3.7.9','8.5.4.7']
if (version_list1.contains("${ver}")){
println("build version branch")
} else {
println("${ver}")
println("${ver}".getClass())
println(version_list1[0])
println(version_list1[0].getClass())
println("build master branch")
}
}
}
}
}
}
The pipeline always goes into the else block and prints the following:
8.4.7.8
class org.codehaus.groovy.runtime.GStringImpl
8.4.7.8
class java.lang.String
build master branch

Don't use string interpolation to resolve parameters; access them directly as params.PARAM_NAME instead. In your code, ver is already a java.lang.String after the toString() call, but contains("${ver}") wraps it back into a GStringImpl, so the equality check fails. contains(ver) would also pass, but reading params.Version directly is the cleaner fix, as in the example below.
script {
    def version_list1 = ['8.4.7.8', '8.3.7.9', '8.5.4.7']
    if (version_list1.contains(params.Version)) {
        println("build version branch")
    } else {
        println("build master branch")
    }
}
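The underlying gotcha can be reproduced in plain Groovy outside Jenkins; a minimal sketch (assuming a vanilla Groovy runtime, nothing Jenkins-specific):
def ver = '8.4.7.8'
def gver = "${ver}"                           // org.codehaus.groovy.runtime.GStringImpl
assert gver == '8.4.7.8'                      // Groovy's == coerces GString vs String
assert !['8.4.7.8'].contains(gver)            // List.contains() uses equals(), which does not
assert ['8.4.7.8'].contains(gver.toString())  // explicit conversion works too
This is why the output above shows matching values but mismatched classes.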

Related

How to set a string input's default value when the input value is null in a Jenkins pipeline

I set a string parameter in a Jenkins pipeline (Groovy script) like this:
def call(String type, Map map) {
    if (type == "gradle") {
        pipeline {
            agent any
            parameters {
                string(name: 'k8sResourceType', defaultValue: "${map.k8sResourceType}", description: 'Kubernetes Resource Type')
            }
        }
    }
}
Is it possible to set a default value when ${map.k8sResourceType} is null? If it is null, the Kubernetes resource type should default to Deployment, because 90% of our apps are Deployments and only special apps are StatefulSets. I am a newbie in Groovy.
You'd be better off using an environment block instead of parameters to achieve this:
pipeline {
    agent any
    environment {
        k8sResourceType = getResourceType(map.k8sResourceType)
    }
    stages {
        stage('Hello World') {
            steps {
                echo "Value: ${env.k8sResourceType}"
            }
        }
    }
}

def getResourceType(value) {
    return value == null ? "Deployment" : value
}
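If you'd rather not define a helper method, Groovy's Elvis operator expresses the same fallback inline; a minimal sketch under the same assumption that map is in scope (note ?: also falls back on an empty string, not only on null):
environment {
    // null or empty falls back to 'Deployment'
    k8sResourceType = "${map.k8sResourceType ?: 'Deployment'}"
}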

How to get a Jenkins parameter's default value

I want to add a check to a Jenkins job: if the job's parameters have been modified, I want to display an input modal asking whether to proceed or abort. For that I need to verify whether the parameters were modified.
I can get the modified parameter values in the job, but how do I retrieve the default values of those parameters so I can check whether any of them changed?
def DEFAULT_VALUE = "42"

pipeline {
    agent any
    parameters { string(name: 'MY_PARAM', defaultValue: DEFAULT_VALUE, description: '') }
    stages {
        stage('Example') {
            steps {
                script {
                    if (params.MY_PARAM == DEFAULT_VALUE) {
                        echo 'Default value used'
                    } else {
                        echo 'Non-default value used'
                    }
                }
            }
        }
    }
}
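If several parameters need the same check, one approach is to keep the defaults in a map mirroring the parameters block and diff it against params; a sketch, where the parameter names are illustrative:
def DEFAULTS = [MY_PARAM: '42', OTHER_PARAM: 'abc'] // must mirror the parameters block
...
script {
    // collect the names of all parameters whose current value differs from its default
    def modified = DEFAULTS.findAll { name, dflt -> params[name] != dflt }.keySet()
    if (modified) {
        input message: "Parameters modified: ${modified}. Proceed?"
    }
}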

Value returned from a script is not assigned to a variable declared in a Jenkins declarative pipeline stage

I am adding a Jenkins declarative pipeline for automated testing. In the test run stage I want to extract the failed tests from the log, using a Groovy function that lives in a separate script file rather than in the Jenkinsfile itself. The function works fine and builds a string containing the failure details. Inside a pipeline stage I load the script, call the function, and assign the returned string to another variable, but when I echo that variable it prints an empty string.
pipeline {
    agent {
        kubernetes {
            yamlFile 'kubernetesPod.yml'
        }
    }
    environment {
        failure_msg = ""
    }
    stages {
        stage('Run Test') {
            steps {
                container('ansible') {
                    script {
                        def notify = load('src/TestResult.groovy')
                        def result = notify.extractTestResult("${WORKSPACE}/testreport.xml")
                        sh "${result}"
                        if (result != "") {
                            failure_msg = failure_msg + result
                        }
                    }
                }
            }
        }
    }
    post {
        always {
            script {
                sh 'echo Failure message.............${failure_msg}'
            }
        }
    }
}
Here sh 'echo ${result}' prints an empty string, but extractTestResult() returns a non-empty string.
I am also unable to use the environment variable failure_msg in the post section; it fails with groovy.lang.MissingPropertyException: No such property: failure_msg for class: groovy.lang.Binding.
Can anyone please help me with this?
EDIT:
Even after I fixed the string interpolation, I was getting the same error. That was because Jenkins does not allow using sh inside a docker container; there is an open bug ticket on the Jenkins issue board.
I would suggest using a global variable to hold the error message. My guess is that the variable does not exist in your scope.
def FAILURE_MSG = "" // global variable, initialised so that '+' works on it

pipeline {
    ...
    stages {
        stage(...) {
            steps {
                container('ansible') {
                    script {
                        ...
                        if (result != "") {
                            FAILURE_MSG = FAILURE_MSG + result
                        }
                    }
                }
            }
        }
    }
    post {
        always {
            script {
                sh "echo Failure message ${FAILURE_MSG}" // Hint: double quotes give Groovy string interpolation
            }
        }
    }
}
(A similar SO question can be found here.)
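An alternative to the script-level global is to write the message into env from the script block; values assigned that way are plain Strings, stay visible in the post section, and are exported to shell steps, so the shell can expand them itself (a sketch):
script {
    ...
    if (result != "") {
        env.FAILURE_MSG = (env.FAILURE_MSG ?: '') + result
    }
}
...
post {
    always {
        // single quotes: the shell, not Groovy, expands the exported variable
        sh 'echo "Failure message: ${FAILURE_MSG}"'
    }
}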

Last successful build's revision for an upstream MultiBranch Job in Jenkins Declarative Pipeline

I'd like to get the build revisions of the last successful builds of upstream jobs. The upstream jobs are multibranch jobs.
So far I'm generating a list of the upstream jobs' names as triggers, but I can't seem to find the right method to call.
import jenkins.model.Jenkins

def upstreamPackages = ['foo', 'bar']
def upstreamJobs = upstreamPackages.collect { "${it}-multibranch/master" }.join(',')

pipeline {
    agent none
    triggers {
        upstream(upstreamProjects: upstreamJobs,
                 threshold: hudson.model.Result.SUCCESS)
    }
    stages {
        stage('test') {
            steps {
                script {
                    upstreamJobs.each {
                        println it
                        job = Jenkins.instance.getItem(it)
                        job.getLastSuccessfulBuild()
                        revision = job.getLastSuccessfulBuild().changeset[0].revision
                        println revision
                    }
                }
            }
        }
    }
}
This results in a null object for item. What's the correct way to do this?
UPDATE 1
After discovering the Jenkins Script Console and this comment, I managed to come up with the following:
import jenkins.model.Jenkins
import hudson.plugins.git.util.BuildData

def upstreamPackages = ['foo', 'bar']
def upstreamJobsList = upstreamPackages.collect { "${it}-multibranch/master" }.join(',')

@NonCPS
def resolveRequirementsIn(packages) {
    BASE_URL = 'git@github.com:myorg'
    requirementsIn = ''
    packages.each { pkg ->
        revision = getLastSuccessfulBuildRevision("${pkg}-multibranch")
        requirementsIn <<= "-e git+${BASE_URL}/${pkg}.git@${revision}#egg=${pkg}\n"
    }
    println requirementsIn
    return requirementsIn
}

@NonCPS
def getLastSuccessfulBuildRevision(jobName) {
    project = Jenkins.instance.getItem(jobName)
    masterJob = project.getAllItems().find { job -> job.getName() == 'master' }
    build = masterJob.getLastSuccessfulBuild()
    return build.getAction(BuildData.class).getLastBuiltRevision().sha1String
}

pipeline {
    agent { label 'ci_agent' }
    triggers {
        upstream(upstreamProjects: upstreamJobsList,
                 threshold: hudson.model.Result.SUCCESS)
    }
    stages {
        stage('Get artifacts') {
            steps {
                script {
                    requirementsIn = resolveRequirementsIn upstreamPackages
                    writeFile file: 'requirements.in', text: requirementsIn
                }
            }
        }
    }
}
pipeline {
agent { label 'ci_agent' }
triggers {
upstream(upstreamProjects: upstreamJobsList,
threshold: hudson.model.Result.SUCCESS)
}
stages {
stage('Get artifacts'){
steps{
script{
requirementsIn = resolveRequirementsIn upstreamPackages
writeFile file: 'requirements.in', text: requirementsIn
}
}
}
}
}
It's throwing an error:
an exception which occurred:
in field org.jenkinsci.plugins.pipeline.modeldefinition.withscript.WithScriptScript.script
in object org.jenkinsci.plugins.pipeline.modeldefinition.agent.impl.LabelScript#56d1724
in field groovy.lang.Closure.delegate
in object org.jenkinsci.plugins.workflow.cps.CpsClosure2#27378d57
in field groovy.lang.Closure.delegate
in object org.jenkinsci.plugins.workflow.cps.CpsClosure2#6e6c3c4e
in field org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.closures
in object org.jenkinsci.plugins.workflow.cps.CpsThreadGroup#5d0ffef3
in object org.jenkinsci.plugins.workflow.cps.CpsThreadGroup#5d0ffef3
Caused: java.io.NotSerializableException:
org.jenkinsci.plugins.workflow.multibranch.WorkflowMultiBranchProject
The problem was that Jenkins' Pipeline DSL requires all assigned objects to be Serializable.
Jenkins.instance.getItem(jobName) returns a WorkflowMultiBranchProject, which is not Serializable; neither is Jenkins.instance.getItem(jobName).getItem('master'), which is a WorkflowJob.
So I went down the call chain to what I needed, replacing variable assignments with chained method calls, and came up with the following solution.
import jenkins.model.Jenkins
import hudson.plugins.git.util.BuildData

def upstreamPackages = ['foo', 'bar']
def upstreamJobsList = upstreamPackages.collect { "${it}-multibranch/master" }.join(',')

String requirementsInFrom(packages) {
    final BASE_URL = 'git@github.com:myorg'
    requirementsIn = ''
    packages.each { pkg ->
        revision = Jenkins.instance.getItem("${pkg}-multibranch")
                                   .getItem('master')
                                   .getLastSuccessfulBuild()
                                   .getAction(BuildData.class)
                                   .getLastBuiltRevision()
                                   .sha1String
        requirementsIn <<= "-e git+${BASE_URL}/${pkg}.git@${revision}#egg=${pkg}\n"
    }
    return requirementsIn.toString()
}
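Called from the pipeline, it slots into the same stage as in UPDATE 1; a sketch:
script {
    // no intermediate job/build objects are held in variables here
    writeFile file: 'requirements.in', text: requirementsInFrom(upstreamPackages)
}
Chaining the calls means no non-serializable WorkflowMultiBranchProject or WorkflowJob is ever held in a variable that the CPS interpreter would try to checkpoint.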

Call Groovy method inside Jenkinsfile's environment block

I have a Jenkinsfile that looks like this:
static def randomUser() {
    final def POOL = ["a".."z"].flatten()
    final Random rand = new Random(System.currentTimeMillis())
    return (0..5).collect { POOL[rand.nextInt(POOL.size())] }.join("")
}

pipeline {
    agent any
    environment {
        //CREATOR = sh(script: "randomUser()", returnStdout: true)
        CREATOR = "fixed-for-now"
        ...
    }
    stages {
        ...
        stage("Terraform Plan") {
            when { not { branch "master" } }
            steps {
                sh "terraform plan -out=plan.out -var creator=${CREATOR} -var-file=env.tfvars "
            }
        }
        ...
        stage("Terraform Destroy") {
            when { not { branch "master" } }
            steps {
                sh "terraform destroy -auto-approve -var creator=${CREATOR} -var-file=env.tfvars "
            }
        }
        ...
    }
}
My problem is that I cannot call randomUser() inside the environment block. I need the CREATOR variable to be a random string every time, and I'd prefer CREATOR to be a global environment variable since it's used in many stages.
Is there a way to achieve (or work around) this?
Given your specific use case, it might be better to use CREATOR as a parameter instead of an environment variable, and to assign its defaultValue from the return of your randomUser() method:
pipeline {
    agent any
    parameters {
        // call the Groovy method directly; sh() runs a shell command and cannot invoke a Groovy function
        string(name: 'CREATOR', defaultValue: randomUser())
    }
    ...
}
You can then use it in your pipeline like so:
stage("Terraform Plan") {
when { not { branch "master" } }
steps {
sh "terraform plan -out=plan.out -var creator=${params.CREATOR} -var-file=env.tfvars "
}
}
This way you have a correctly assigned and useful defaultValue for CREATOR, but with the ability to override it per-pipeline when necessary.
Alternatively, you can achieve this by removing the environment block and defining a global variable CREATOR before the pipeline block:
def CREATOR

pipeline {
    agent any
    stages {
        stage('Initialize the variables') {
            steps {
                script {
                    CREATOR = randomUser()
                }
            }
        }
        ...
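Later stages can then reference the global directly in Groovy-interpolated strings, e.g. reusing the question's Terraform step (a sketch):
stage("Terraform Plan") {
    when { not { branch "master" } }
    steps {
        // CREATOR is the script-level Groovy global set in the initialization stage
        sh "terraform plan -out=plan.out -var creator=${CREATOR} -var-file=env.tfvars"
    }
}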
