I'm in the process of translating Jenkins 2 freestyle jobs to work as pipeline jobs with Groovy, which I have very little experience in. I can't for the life of me figure out how to get the arguments to run inside of Groovy. Here's the important bit of the script:
stage ('Clean') {
    try {
        notifyBuild('STARTED')
        dir("cloudformation") {
            def list = sh(script: "ls -1 *.template", returnStdout: true)
            for (i in list) {
                sh "aws s3 cp $i s3://URLHERE —expires 1 —cache-control 1"
            }
        }
    } catch (e) {
        // If there was an exception thrown, the build failed
        currentBuild.result = "FAILED"
        throw e
    } finally {
        // Success or failure, always send notifications
        notifyBuild(currentBuild.result)
    }
}
The relevant bit is sh "aws s3 cp $i s3://URLHERE —expires 1 —cache-control 1". Attempting to run this returns the following error:
[cloudformation] Running shell script
+ aws s3 cp e s3://URLHERE —expires 1 —cache-control 1
Unknown options: —expires,1,—cache-control,1
Google has produced little in the way of guidance on shell scripts with arguments inside of Groovy. Obviously it's trying to treat each space-delimited chunk as its own bit; how do I stop that behavior?
Edited to add:
I have tried sh "aws s3 cp $i s3://URLHERE '—expires 1' '—cache-control 1'", which then returns the same error but with Unknown options: —expires 1,—cache-control 1. So I get that I can include spaces by quoting appropriately, but that still leaves the underlying issue.
Both parameters need two hyphens: --expires <value> and --cache-control <value>. The command in the question uses em dashes (—), which the AWS CLI doesn't recognize as options, which is why they land in the Unknown options list.
See the S3 documentation of cp.
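A minimal sketch of the corrected loop, keeping the placeholder URL and values from the question. The trim().split('\n') part is an extra assumption on my side: sh(returnStdout: true) returns a single String, and iterating a String in Groovy yields individual characters (which is why the log shows aws s3 cp e), so the output needs to be split into lines first:
dir("cloudformation") {
    def list = sh(script: "ls -1 *.template", returnStdout: true).trim().split('\n')
    for (i in list) {
        // plain double hyphens, not em dashes
        sh "aws s3 cp $i s3://URLHERE --expires 1 --cache-control 1"
    }
}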
This is a concern as we have an executable which returns 2 as a warning. We do not want to fail the Jenkins build pipeline just because of this. How can we modify the pipeline to accept an exit code of 2 and print a reasonable warning message based on the exit code?
D:\Stage>c:\bin\mycommand
script returned exit code 2
When you run sh or bat in a Jenkins pipeline it will always fail the build (and throw an exception) for any non-zero exit code - and that cannot be changed.
What you can do is use the returnStatus option of the sh step (or bat), which will return the exit code of the script instead of failing the build, and then you can use something like:
pipeline {
    agent any
    stages {
        stage('Run Script') {
            steps {
                script {
                    def exitCode = sh script: 'mycommand', returnStatus: true
                    if (exitCode == 2) {
                        // do something
                    }
                    else if (exitCode) {
                        // other non-zero exit codes
                    }
                    else {
                        // exit code 0
                    }
                }
            }
        }
    }
}
The only drawback of this approach is that returnStatus cannot be used together with returnStdout, so if you need to get the returned output you will need to get it in another way (write to file and then read it for example).
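For example (a rough sketch, assuming a POSIX shell agent and the same hypothetical mycommand; the file name is purely illustrative), you can redirect the output to a file and read it back with readFile:
script {
    // capture the exit code without failing the build; send the output to a file instead
    def exitCode = sh script: 'mycommand > command-output.txt 2>&1', returnStatus: true
    // read the captured output back from the workspace
    def output = readFile 'command-output.txt'
    if (exitCode == 2) {
        echo "Warning from mycommand: ${output}"
    }
}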
I want to get a line from a file in my workspace. I am using this script:
stage('Test') {
    steps {
        script {
            outputJenkins = 'output-jenkins.log'
            sh "cd invoker && mvn clean install && mvn exec:java -Dexec.mainClass=\"com.JenkinsRunner\" -Dexec.args=\"qal ${GIT_COMMIT_HASH}\" > ../${outputJenkins}"
            logFile = readFile(outputJenkins)
            echo logFile
            adminRepoLogLine = sh "echo logFile | grep \"Admin repo url is :::\""
            echo adminRepoLogLine
        }
    }
}
But I am getting this error:
+ echo logFile
+ grep Admin repo url is :::
script returned exit code 1
The script works fine in my shell when I try it locally. Are there any constraints around doing this in a Jenkinsfile?
If we apply various fixes and improvements to the code in the question to achieve the desired functionality, then it will succeed:
stage('Test') {
    steps {
        script {
            // declared outside the dir block so it remains in scope afterwards
            String output = ''
            dir('invoker') {
                sh(label: 'Maven Clean Install', script: 'mvn clean install')
                // assign maven output to variable
                output = sh(label: 'Maven Git Log', script: "mvn exec:java -Dexec.mainClass=\"com.JenkinsRunner\" -Dexec.args=\"qal ${GIT_COMMIT_HASH}\"", returnStdout: true)
            }
            // assign regex return to variable
            def adminRepoLogLine = output =~ /(.*Admin repo url is :::.*)/
            // print extracted string from return
            print adminRepoLogLine[0][1]
        }
    }
}
Note that GIT_COMMIT_HASH is neither an intrinsic Jenkins environment variable, nor defined in the pipeline code in the question, so it will need to be defined at Pipeline scope elsewhere in your code.
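One way to define it (purely illustrative; this assumes the repository has already been checked out on the node) is an environment entry derived from the current revision:
environment {
    // illustrative only: short hash of the revision checked out in the workspace
    GIT_COMMIT_HASH = sh(script: 'git rev-parse --short HEAD', returnStdout: true).trim()
}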
This is because the string literal logFile does not contain the string Admin repo url is :::. If there's no such match, then grep will exit with status 1.
You probably want grep to read the actual log file (the one named by ${outputJenkins}) instead:
cat ${outputJenkins} | grep \"Admin repo url is :::\"
or, even simpler:
grep \"Admin repo url is :::\" ${outputJenkins}
Append || true (or ||:) to the command if you want to avoid the errors when the log line does not appear.
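Putting that together with the variables from the question (a sketch, not a drop-in replacement for the whole stage):
script {
    // grep reads the log file directly; '|| true' keeps a missing line from failing the step
    adminRepoLogLine = sh(
        script: "grep 'Admin repo url is :::' ${outputJenkins} || true",
        returnStdout: true
    ).trim()
    echo adminRepoLogLine
}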
In my Jenkinsfile I want to dynamically find the unity version using a python script like so:
environment {
    UNITY_EDITOR = bat(script: "py $WORKSPACE/get_versions.py --unity", returnStdout: true).trim()
    UNITY_BASE = "C:/Program Files/Unity/Hub/Editor/$UNITY_EDITOR/Editor/Unity.exe"
    UNITY_WRAPPER = "UnityBatchWrapper -silent-crashes -no-dialogs -batchmode -quit -unityPath \"$UNITY_BASE\""
}
post {
    always {
        script {
            echo "Returning license"
            licenseReturnStatus = bat (
                script: "$UNITY_WRAPPER -returnlicense",
                returnStatus: true
            ) == 0
        }
    }
}
From other stackoverflow answers this seems like it should work, but instead my Jenkins job errors out during the post-build step because $UNITY_WRAPPER isn't defined:
groovy.lang.MissingPropertyException: No such property: UNITY_WRAPPER for class: groovy.lang.Binding
I'm thinking the batch step is what's failing, even though Jenkins doesn't complain about it. I've also tried using $env.WORKSPACE and %WORKSPACE% and that doesn't work either.
I'm beginning to think $WORKSPACE doesn't exist until after the environment block...
Turns out I didn't have Python installed since it was an ephemeral GCP builder and I hadn't updated the node label yet.
For anyone reading this who has trouble with bat commands: be sure to put an @ sign in front of your command, like "@py ...", or else the command itself will be echoed into the captured output. Also trim your output so it doesn't have a CRLF in it.
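For example, the environment entry from the question with both suggestions applied (a sketch; the script path is unchanged from the question):
environment {
    // '@' stops the command itself from being echoed into the captured output; trim() drops the trailing CRLF
    UNITY_EDITOR = bat(script: "@py $WORKSPACE/get_versions.py --unity", returnStdout: true).trim()
}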
I want to run an external shell command (for example, git clone) inside a Jenkins pipeline.
I have found 2 ways of doing this.
This one works:
steps {
    sh "git clone --branch $BRANCH --depth 1 --no-single-branch $REMOTE $LOCAL"
}
Downsides:
I only see the output when the complete command is finished, which is annoying if the command takes a long time.
I need to do some Groovy scripting to look up values in a Map, based on parameters chosen by the user who starts the build. Haven't found a way to do that without a script {} block.
A variation is to run a Bash script that runs the git clone command; that also works, but it will get me into trouble when running on Windows nodes.
The next one errors on
fatal: could not create work tree dir 'localFolder'.: Permission denied
steps {
    script {
        def localFolder = new File(products[params.PRODUCT].local)
        if (!localFolder.exists()) {
            def gitCommand = 'git clone --branch ' + params.BRANCH + ' --depth 1 --no-single-branch ' + products[params.PRODUCT].remote + ' ' + localFolder
            runCommand(gitCommand)
        }
    }
}
This is the runCommand() wrapper:
def runCommand = { strList ->
    assert ( strList instanceof String ||
             ( strList instanceof List && strList.each{ it instanceof String } ) \
    )
    def proc = strList.execute()
    proc.in.eachLine { line -> println line }
    proc.out.close()
    proc.waitFor()
    print "[INFO] ( "
    if (strList instanceof List) {
        strList.each { print "${it} " }
    } else {
        print strList
    }
    println " )"
    if (proc.exitValue()) {
        println "gave the following error: "
        println "[ERROR] ${proc.getErrorStream()}"
    }
    assert !proc.exitValue()
}
My question is: how come I have permission to create directories when running a sh command, and how come I don't have that permission when I do the same thing inside a script {} block with .execute()?
I'm intentionally using the example of the git clone command to avoid getting answers that don't read the question, like using a dir {} block. If I can't create the git directory, then I can also not create the files inside that directory.
If you want to run shell commands, use the sh step, not Groovy's process execution. There is one main reason for that: any Groovy code you execute inside the script block gets executed on the master node, and this is (probably) the reason you see the permission denied issue. The sh step executes on the expected node and thus creates a workspace there. When you execute Groovy code that is designed to create a folder in your workspace, it fails because there is no workspace on the master node.
"1. Except for the steps themselves, all of the Pipeline logic, the Groovy conditionals, loops, etc execute on the master. Whether simple or complex! Even inside a node block!"
Source: https://jenkins.io/blog/2017/02/01/pipeline-scalability-best-practice/#fundamentals
However, there is a solution to that. You can easily combine the sh step with the script block. There is no issue with using any of the existing Jenkins pipeline steps inside the script block. Consider the following example:
steps {
    script {
        def localFolder = products[params.PRODUCT].local
        if (!fileExists(file: localFolder)) {
            sh 'git clone --branch ' + params.BRANCH + ' --depth 1 --no-single-branch ' + products[params.PRODUCT].remote + ' ' + localFolder
        }
    }
}
Keep in mind that this example uses the fileExists step to check whether a file exists in the workspace; the readFile step can be used in the same way to read a file's content. Using new File(...) won't work correctly when your workspace is shared between master and slave nodes.
If you want to safely create files in the workspace(s), use writeFile step to make sure that the file is created on the node that executes your pipeline's current stage.
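A minimal sketch of those workspace-safe steps (the file name and content here are purely illustrative):
script {
    // both steps run on the node that owns the workspace, unlike new File(...)
    writeFile file: 'build-info.txt', text: "product: ${params.PRODUCT}"
    if (fileExists('build-info.txt')) {
        echo readFile('build-info.txt')
    }
}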
A solution to my problem:
Don't bother with showing output as a command progresses; just accept that I will only see it at the end.
Compose the entire command inside a script {} block.
Put a sh statement inside the script {} block.
Like this:
steps {
    script {
        def localFolder = new File(products[params.PRODUCT].local)
        if (!localFolder.exists()) {
            def gitCommand = 'git clone --branch ' + params.BRANCH + ' --depth 1 --no-single-branch ' + products[params.PRODUCT].remote + ' ' + localFolder
            sh gitCommand
        }
    }
}
This still doesn't answer my question about the permission issue, I would still like to know the root cause.
Today I faced an issue while writing a pipeline scenario. Look at this part of the script:
stage("test-stage") {
steps {
script {
def srcFile = "test.txt"
def dstFile ="test.txt.gz"
sh "gzip ${srcFile} > ${dstFile}"
}
}
}
As a result, only the part of the command before the '>' (output redirect) sign seems to be executed: + gzip test.txt. How is this symbol being processed, and how can I work around this issue? Any help appreciated.
It's a bit confusing that Jenkins doesn't appear to log the whole command, but that is just the shell's trace output (set -x), which doesn't print redirections. The real problem is the way you invoke gzip: without -c, gzip test.txt compresses the file in place to test.txt.gz and writes nothing to stdout, so the redirection only creates a conflicting test.txt.gz up front. You could use this instead:
sh "cat ${srcFile} | gzip > ${dstFile}"