Groovy - Jenkins Pipeline - Groovy CPS doesn't go through .eachLine method - jenkins

I am trying to run this code inside a Jenkins Pipeline script:
def getTags = { svnurl ->
    def command = ["svn", "ls", "${svnurl}"]
    def proc = command.execute()
    proc.waitFor()
    proc.in.eachLine {
        println(it)
    }
}
getTags('http://svnurlexample.net/')
The result should be a list of folders at the SVN location, but what I get is an error:
[Pipeline] echo:
1.0.0/
expected to call java.lang.ProcessImpl$ProcessPipeInputStream.eachLine but wound up catching org.jenkinsci.plugins.workflow.cps.CpsClosure2.call
The proc.in.eachLine call is causing the issue: Groovy finds the first folder at the location but cannot handle the rest and reports an error.

This is what worked for me:
@NonCPS
def getTags(svnurl) {
    def command = ["svn", "ls", "${svnurl}"]
    def proc = command.execute()
    proc.waitFor()
    proc.in.eachLine {
        println(it)
    }
}
getTags('http://svnurlexample.net/')
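For completeness, a CPS-safe alternative that avoids eachLine altogether is also possible. This is only a sketch, not the accepted fix above: it reads the whole output into a string and iterates with a plain for loop, which the CPS interpreter handles without @NonCPS (InputStream.text may require script approval in a sandboxed pipeline).
def getTags(svnurl) {
    def command = ["svn", "ls", "${svnurl}"]
    def proc = command.execute()
    proc.waitFor()
    // Materialize the output before iterating; a for-in loop over a List is CPS-safe
    def lines = proc.in.text.readLines()
    for (line in lines) {
        println(line)
    }
}
getTags('http://svnurlexample.net/')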

Related

How to execute a Groovy script written in a Jenkins parameter on a slave node

My small piece of code:
def proc = './test.py'.execute()
proc.waitFor()
def output = proc.in.text
def exitcode = proc.exitValue()
def error = proc.err.text
return output.tokenize()
The above Groovy script is executed from one of the Active Choice Reactive Reference Parameters in my Jenkins pipeline. Is there any way to execute it on a different slave? I don't know whether a Groovy script written in a parameter can execute on another slave or not.
Could someone help me achieve this?
You can try this:
pipeline {
    agent {
        node { label "name-of-slave-jenkins" }
    }
    stages {
        stage('stage 1') {
            steps {
                script {
                    def proc = './test.py'.execute()
                    proc.waitFor()
                }
            }
        }
        stage('stage 2') {
            steps {
                script {
                    def output = proc.in.text
                    def exitcode = proc.exitValue()
                    def error = proc.err.text
                    return output.tokenize()
                }
            }
        }
    }
}
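Two caveats worth flagging before relying on the snippet above: proc is declared inside the script block of stage 1, so stage 2 will not see it, and String.execute() runs on the Jenkins controller JVM rather than on the labelled agent. If the goal is to run test.py on the agent itself, a sh step is the usual approach; a minimal sketch, assuming test.py is present in the agent's workspace:
pipeline {
    agent {
        node { label "name-of-slave-jenkins" }
    }
    stages {
        stage('run script on agent') {
            steps {
                script {
                    // sh runs in the agent's workspace; returnStdout captures the script output
                    def output = sh(script: './test.py', returnStdout: true).trim()
                    echo "Tokens: ${output.tokenize()}"
                }
            }
        }
    }
}
Note that, as far as I know, Active Choices parameter scripts themselves are always evaluated on the controller, so moving the execution into a pipeline stage like this is typically the way to involve a specific agent.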

How to correctly use CPS notation in a Jenkins pipeline

I am trying to write some code in Jenkins, but my knowledge is quite limited. I need to read some information (the committer for that CL) from an XML file that Perforce (SCM) generates, and I then use that information in another function to send an email in case the static analysis finds an error. The thing is, I keep getting the error "expected to call WorkflowScript.sendEmailNewErrors but wound up catching readJSON". I have gone through the CPS documentation that the output points me to, but honestly it is still not clear to me what is wrong. My pipeline would be something like this:
import groovy.json.JsonSlurper

@NonCPS
def sendEmailNewErrors() {
    def submitter = findItemInChangelog("changeUser")
    def Emails = readJSON file: "D:/Emails.json"
    def email = "test@email.com"
    def msgList = []
    def url = "http://localhost:8080/job/job1/1374/cppcheck/new/api/json"
    def json = new JsonSlurper().parseText(new URL(url).text)
    for (key in Emails.keySet()) {
        if (submitter == key) {
            email = Emails.get(key)
        }
    }
    json.issues.each { issue ->
        def msg = "New ERROR found in static analysis, TYPE OF ERROR ${issue.type}" +
            ", SEVERITY: ${issue.severity}, ERROR MESSAGE: ${issue.message}" +
            ", FILE ${issue.fileName} AT LINE: ${issue.lineStart}"
        msgList.add(msg)
    }
    msgList.each { msg ->
        println msg
        mail to: email,
            subject: "New errors found in job1 build pipeline",
            body: "$msg"
    }
}

@NonCPS
def findItemInChangelog(item) {
    def result = "Build ran manually"
    def found = false
    def file = new XmlSlurper().parse("C:/Users/User/.jenkins/jobs/job1/builds/1374/changelog5966810591791724161.xml")
    file.entry.each { entry ->
        entry.changenumber.each { changenumber ->
            changenumber.children().each { tag ->
                if (tag.name() == item && found != true) {
                    result = tag.text()
                    found = true
                }
            }
        }
    }
    return result.toString()
}

pipeline {
    agent any
    stages {
        stage("test") {
            steps {
                script {
                    sendEmailNewErrors()
                }
            }
        }
    }
}
I have tried without the @NonCPS annotation, but I understand that if the .each method is used the annotation has to be there. Is anyone with more experience with this able to help?
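Not a verified fix for this exact job, but the usual cause of this error is calling pipeline steps (readJSON, mail) from inside a @NonCPS method. A sketch of the common restructuring, assuming the existing findItemInChangelog helper stays as it is: keep only the pure parsing in @NonCPS and call the steps from regular (CPS) pipeline code.
import groovy.json.JsonSlurper

// Pure parsing only, no pipeline steps, so @NonCPS is safe here
@NonCPS
def buildMessages(String jsonText) {
    def msgList = []
    new JsonSlurper().parseText(jsonText).issues.each { issue ->
        msgList << ("New ERROR found in static analysis, TYPE OF ERROR ${issue.type}" +
            ", SEVERITY: ${issue.severity}, ERROR MESSAGE: ${issue.message}" +
            ", FILE ${issue.fileName} AT LINE: ${issue.lineStart}")
    }
    return msgList
}

// Regular CPS code: pipeline steps such as readJSON and mail are allowed here
def sendEmailNewErrors() {
    def submitter = findItemInChangelog("changeUser")
    def emails = readJSON file: "D:/Emails.json"
    def email = emails[submitter] ?: "test@email.com"
    def jsonText = new URL("http://localhost:8080/job/job1/1374/cppcheck/new/api/json").text
    for (msg in buildMessages(jsonText)) {
        echo msg
        mail to: email, subject: "New errors found in job1 build pipeline", body: msg
    }
}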

Unable to Create New file in Jenkins Pipeline

I am trying to create a new file in a Jenkins Pipeline, but I am getting this error:
java.io.FileNotFoundException: /var/lib/jenkins/workspace/Pipeline-Groovy/test.txt (No such file or directory)
But when I execute the commands below outside the pipeline, the new file is created:
def newFile = new File("/var/lib/jenkins/workspace/test/test.txt")
newFile.append("hello\n")
println newFile.text
If I use the same code in the Pipeline, I get the above error:
pipeline {
    agent any
    options {
        buildDiscarder(logRotator(numToKeepStr: '5'))
        timestamps()
    }
    stages {
        stage('Demo1-stage') {
            steps {
                deleteDir()
                script {
                    def Jobname = "${JOB_NAME}"
                    echo Jobname
                }
            }
        }
        stage('Demo-2stage') {
            steps {
                script {
                    def workspace = "${WORKSPACE}"
                    echo workspace
                    def newFile = new File("/var/lib/jenkins/workspace/Pipeline-Groovy/test.txt")
                    newFile.createNewFile()
                    sh 'ls -lrt'
                }
            }
        }
    }
}
It looks like your folder is not present. Do not give an absolute path while creating the file unless it is a requirement. I see that in your case you need a file in the workspace. Always use ${WORKSPACE} to get the current working directory:
def newFile = new File("${WORKSPACE}/test.txt")
newFile.createNewFile()
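One more point worth noting, as a sketch rather than part of the answer above: new File() always touches the filesystem of the JVM running the Groovy code, i.e. the Jenkins controller, so even a correct path can fail or land on the wrong machine when the build runs on an agent. The writeFile and readFile pipeline steps operate in the workspace of the node executing the stage:
stage('Demo-2stage') {
    steps {
        script {
            // writeFile/readFile act on the workspace of the node running this stage,
            // unlike new File(), which uses the controller's filesystem
            writeFile file: 'test.txt', text: 'hello\n'
            echo readFile('test.txt')
            sh 'ls -lrt'
        }
    }
}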

Groovy - readYaml() expecting java.util.LinkedHashMap instead of a file

As a part of our Jenkins solutions, we use Groovy in our pipelines.
In one of our Groovy files I want to update a docker-stack.yaml.
To do so I'm using readYaml():
stage("Write docker-stack.yaml") {
def dockerStackYamlToWrite = readFile 'docker-stack.yaml'
def dockerStackYaml = readYaml file: "docker-stack.yaml"
def imageOrigin = dockerStackYaml.services[domain].image
def versionSource = imageOrigin.substring(imageOrigin.lastIndexOf(":") + 1, imageOrigin.length())
def imageWithNewVersion = imageOrigin.replace(versionSource, imageTag)
dockerStackYamlToWrite = dockerStackYamlToWrite.replace(imageOrigin, imageWithNewVersion)
sh "rm docker-stack.yaml"
writeFile file: "docker-stack.yaml", text: dockerStackYamlToWrite
sh "git add docker-stack.yaml"
sh "git commit -m 'promote dockerStack to ${envname}'"
sh "git push origin ${envname}"
}
I am using a test to validate my code:
import org.junit.Before
import org.junit.Test

class TestUpdateVersionInDockerStack extends JenkinsfileBaseTest {

    @Before
    void setUp() throws Exception {
        helper.registerAllowedMethod("build", [Map.class], null)
        helper.registerAllowedMethod("steps", [Object.class], null)
        super.setUp()
    }

    @Test void success() throws Exception {
        def script = loadScript("src/test/jenkins/updateVersionInDockerStack/success.jenkins")
        script.execute()
    }
}
Here is the success.jenkins:
def execute() {
    node() {
        stage("Build") {
            def version = buildVersion()
            updateVersionInDockerStack([
                DOMAIN      : "security-package",
                IMAGE_TAG   : version,
                GITHUB_ORGA : "Bla",
                TARGET_ENV  : "int"
            ])
        }
    }
}
return this
When I run my test I get this message:
groovy.lang.MissingMethodException: No signature of method: updateVersionInDockerStack.readYaml() is applicable for argument types: (java.util.LinkedHashMap) values: [[file:docker-stack.yaml]]
At this point I'm lost. From what I understand from the documentation, readYaml() can take a file as an argument.
Can you help me understand why it is expecting a LinkedHashMap? Do I have to convert my value to a LinkedHashMap?
Thank you
Your pipeline unit test fails because there is no readYaml method registered in the pipeline's allowed methods. In your TestUpdateVersionInDockerStack test class, simply add the following line to the setUp method:
helper.registerAllowedMethod("readYaml", [Map.class], null)
This instructs the Jenkins Pipeline Unit environment that the readYaml method accepting a single argument of type Map is allowed in the pipeline, and invocations of this method will be registered in the unit test call stack. You can add a printCallStack() call to your test method to see the stack of all executed steps during the test:
@Test void success() throws Exception {
    def script = loadScript("src/test/jenkins/updateVersionInDockerStack/success.jenkins")
    script.execute()
    printCallStack()
}
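Putting it together, the setUp method would look roughly like this. This is a sketch: the other registered methods are kept from the original test class, and any further steps the pipeline calls (readFile, writeFile, sh, ...) may need registering in the same way if the base test class does not already provide them.
@Before
void setUp() throws Exception {
    helper.registerAllowedMethod("build", [Map.class], null)
    helper.registerAllowedMethod("steps", [Object.class], null)
    // register readYaml so the pipeline under test can call it
    helper.registerAllowedMethod("readYaml", [Map.class], null)
    super.setUp()
}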

How do you load a Groovy file and execute it?

I have a Jenkinsfile dropped into the root of my project and would like to pull in a Groovy file for my pipeline and execute it. The only way I've been able to get this to work is to create a separate project and use the fileLoader.fromGit command. I would like to do:
def pipeline = load 'groovy-file-name.groovy'
pipeline.pipeline()
If your Jenkinsfile and Groovy file are in one repository and the Jenkinsfile is loaded from SCM, you have to do the following:
Example.Groovy
def exampleMethod() {
    //do something
}

def otherExampleMethod() {
    //do something else
}

return this
JenkinsFile
node {
    def rootDir = pwd()
    def exampleModule = load "${rootDir}/script/Example.Groovy"
    exampleModule.exampleMethod()
    exampleModule.otherExampleMethod()
}
If you have a pipeline which loads more than one Groovy file and those Groovy files also share things among themselves:
JenkinsFile.groovy
def modules = [:]
pipeline {
    agent any
    stages {
        stage('test') {
            steps {
                script {
                    modules.first = load "first.groovy"
                    modules.second = load "second.groovy"
                    modules.second.init(modules.first)
                    modules.first.test1()
                    modules.second.test2()
                }
            }
        }
    }
}
first.groovy
def test1() {
    //add code for this method
}

def test2() {
    //add code for this method
}

return this
second.groovy
import groovy.transform.Field

@Field private First = null

def init(first) {
    First = first
}

def test1() {
    //add code for this method
}

def test2() {
    First.test2()
}

return this
You have to do checkout scm (or some other way of checking out code from SCM) before doing load.
Thanks @anton and @Krzysztof Krasori, it worked fine once I combined checkout scm and the exact source file path.
Example.Groovy
def exampleMethod() {
    println("exampleMethod")
}

def otherExampleMethod() {
    println("otherExampleMethod")
}

return this
JenkinsFile
node {
    // Git checkout before loading the source file
    checkout scm

    // To verify the files were checked out
    sh '''
        ls -lhrt
    '''

    def rootDir = pwd()
    println("Current Directory: " + rootDir)

    // point to the exact source file
    def example = load "${rootDir}/Example.Groovy"
    example.exampleMethod()
    example.otherExampleMethod()
}
Very useful thread; I had the same problem and solved it by following you.
My problem was: Jenkinsfile -> calls first.groovy -> calls second.groovy
Here is my solution:
Jenkinsfile
node {
    checkout scm
    //other commands if you have any
    def runner = load pwd() + '/first.groovy'
    runner.whateverMethod(arg1, arg2)
}
first.groovy
def whateverMethod(arg1, arg2) {
    //whatever other commands
    def caller = load pwd() + '/second.groovy'
    caller.otherMethod(arg1, arg2)
}
return this
NB: the args are optional; add them if you have any, or leave them out.
Hope this helps further.
In case the methods called on your loaded Groovy script come with their own node blocks, you should not call those methods from within the node block that loads the script; otherwise you would be blocking the outer node for no reason.
So, building on @Shishkin's answer, that could look like:
Example.Groovy
def exampleMethod() {
    node {
        //do something
    }
}

def otherExampleMethod() {
    node {
        //do something else
    }
}

return this
Jenkinsfile
def exampleModule

node {
    checkout scm // could not get it running w/o checkout scm
    exampleModule = load "script/Example.Groovy"
}

exampleModule.exampleMethod()
exampleModule.otherExampleMethod()
Jenkinsfile using readTrusted
When running a recent Jenkins version, you can use readTrusted to read a file from the SCM containing the Jenkinsfile without running a checkout, or even a node block:
def exampleModule = evaluate readTrusted("script/Example.Groovy")
exampleModule.exampleMethod()
exampleModule.otherExampleMethod()
