I am using Jenkins to run pipeline Groovy scripts. One of the first steps is a checkout via the checkout plugin. The checkout happens into the <workspace>/source-repo folder.
Now, when I run lsof (it is a Linux machine) I get a lot of open file handles like this:
java 16932 1000 567r REG 202,80 91 7996215 <workspace>/source-repo#tmp/durable-a06b8b8d/output.txt (deleted)
They are building up over time... Why? And what can I do?
I found the problem; it seems to be related to sh in combination with returnStdout: true. So I replaced calls like this:
def ret = sh script: "command", returnStdout: true
with
sh "command > output.txt"
def ret = readFile "output.txt"
sh "rm output.txt"
Feels a bit hacky, but now I am fine.
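Outside Jenkins, the same capture-via-file pattern looks like this (a minimal sketch; some_command is a placeholder, not a real tool):

```shell
# Capture a command's stdout through a file instead of a pipe handle.
# 'some_command' stands in for the real command being run.
some_command() { echo "hello from command"; }

some_command > output.txt   # redirect instead of returnStdout
ret=$(cat output.txt)       # read the result back
rm output.txt               # clean up so nothing is left behind
echo "$ret"
```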
This seems to be fixed in the Durable Task Plugin since version 1.14 (Jun 15, 2017):
https://issues.jenkins-ci.org/browse/JENKINS-43639
Related
I would like to activate a specific set of tools via SDKMAN in my scripted Jenkinsfile pipeline.
Basically I would like to do
node("sdkman && mvn386") {
sh "sdk use maven 3.8.6"
sh "sdk current" // << back to whatever was before
sh "mvn --version"
sh "mvn clean install"
}
But since sdk ... is executed in a subshell, the use is not active in the next line.
(It would work with sdk default ..., but that is out of the question because I do not want to break other running pipelines.)
Semantically, it would be something like this that I need (which is not supported, I think):
node("sdkman && mvn386") {
sh ("sdk use maven 3.8.6") { // << unsupported syntax
sh "sdk current"
sh "mvn --version"
sh "mvn clean install"
}
}
How can I keep the changes that sdk makes to the shell environment for the next lines? Is there another tool to use instead of sh? Something like withEnv?
Side problem: because sdk is just a shell function, I needed to create a wrapper script sdh.sh (containing sdk $*). That, of course, defeats the "changing the environment" part, too. sh "sdk ..." just does not seem to work with shell functions. Is there any way around that?
Something I do not want to do is putting the script lines into one block, because that would not work well with our more complicated pipelines.
sh '''
sdk use maven 3.8.6
sdk current
''' // << I don't want to do this in one shell block
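The underlying issue can be reproduced in plain bash (a sketch; fake_use stands in for the sdk function): each sh step runs in its own shell, so exports made in a child shell are gone by the next step, while a call in the same shell persists.

```shell
# Each Jenkins 'sh' step is a separate shell; bash -c mimics that here.
fake_use() { export TOOL_VERSION=3.8.6; }    # stand-in for 'sdk use maven 3.8.6'

bash -c 'export TOOL_VERSION=3.8.6'          # change made in a child shell...
echo "after child: ${TOOL_VERSION:-unset}"   # ...does not survive here

fake_use                                     # same shell: the change sticks
echo "after same shell: $TOOL_VERSION"
```

One workaround along these lines is to run sdk use and env in a single sh step, capture the PATH it produces, and feed that to withEnv for the following steps.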
I've got a Jenkins pipeline containing stages for source loading, building and deploying on a remote machine through SSH. The problem is with the last one. I saved a script of the following template on the remote server:
#!/bin/bash
bash /<pathTo>/jboss-cli.sh --command="deploy /<anotherPath>/service.war --force"
It works fine if executed in a terminal connected to the remote server.
The best outcome I've received through Jenkins is
/<pathTo>/jboss-cli.sh: line 87: usr/bin/java/bin/java: No such file or directory
in Jenkins console output.
I tried switching between bash and sh, exporting the path to Java in the pipeline script, etc.
Any suggestions are appreciated.
Thanks!
P.S. The execution call from Jenkins looks like:
sh """
ssh -o StrictHostKeyChecking=no $connectionName 'bash /<pathToTheScript>/<scriptName>.sh'
"""
line 87: **usr/bin/java/bin/java**: No such file or directory
As per the error line, the path is being resolved from usr rather than /usr. Can you check if this is the problem?
Sorry, I know this should be in comments section but I don't have right to add comments yet.
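The error string supports this reading: a Java path assembled from a variable that lost its leading slash becomes relative, as this sketch shows (the JAVA_HOME value here is illustrative):

```shell
# If JAVA_HOME lacks its leading '/', the assembled path is relative
# and lookup fails with "No such file or directory".
JAVA_HOME="usr/bin/java"        # note: no leading slash (illustrative value)
JAVA="$JAVA_HOME/bin/java"
echo "$JAVA"                    # -> usr/bin/java/bin/java
```

A non-interactive ssh session typically does not source the profile that sets JAVA_HOME, so jboss-cli.sh may be falling back to a broken default on line 87.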
My Jenkins runs inside Tomcat which runs under user buildman, therefore all the jobs run under that user (in CentOS). Some of my jobs depend on environment variables, which I set in .bashrc. However, the Jenkins jobs do not see any of the variables set in that file even though that script is supposed to be sourced for non-login shells, such as what I would think Jenkins should be (?).
The workaround is simple: I just copy and paste all the variables from my .bashrc into the build command script in Jenkins. But that is not very elegant. Is there any way to get Jenkins to source the .bashrc of the user it runs under so that it gets its usual configuration without having to set it separately in each job?
Jenkins creates a temporary sh script for each script section (at least when using a "classical" project - for the Pipeline approach I'm not sure). This temporary script is executed with sh, which on many Linux systems is a symlink to bash (on others, such as Debian/Ubuntu, to dash); this SO post gives some insights.
Also, according to the bash man page, invoking bash as sh "tries to mimic the startup behavior of historical versions of sh as closely as possible, while conforming to the POSIX standard as well."
This means the .bashrc is not interpreted at all. However, you can try to source the .bashrc for each shell invocation...
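A minimal sketch of that per-invocation sourcing (a temp file stands in for the real ~/.bashrc):

```shell
# Source the rc file explicitly inside the shell invocation, so its
# exports are visible to the commands in that same invocation.
rc=$(mktemp)                                # stand-in for ~/.bashrc
echo 'export MY_VAR=from_bashrc' > "$rc"
bash -c "source '$rc' && echo \$MY_VAR"     # variable is visible here
rm "$rc"
```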
So, I tried a few things and the only solutions that seem to work are:
have a shell script in your repo that uses bash
write a file, chmod it via sh and then run it
In both cases, there needs to be an executable file with content like:
#!/usr/bin/env bash
...
Using sh """ bash -c "...." """ doesn't seem to work.
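The working variant can be sketched like this (paths are illustrative; mktemp stands in for a file in the workspace):

```shell
# Write a real bash script, chmod it, then run it - the approach that
# worked, as opposed to inlining 'bash -c' in an sh step.
script=$(mktemp)
cat > "$script" <<'EOF'
#!/usr/bin/env bash
echo "hello from a real bash script"
EOF
chmod +x "$script"
"$script"
rm "$script"
```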
When my Jenkins agent launches by SSH on redhat linux, I see it does print environment variables defined in .bashrc.
My problem with not seeing changes to .bashrc was that I needed to relaunch the agent so that it picked up the change.
I have found a command that works for me
In .profile, .bashrc, etc.:
export MY_BASH_VAR=123
In Execute Shell:
VAR=$(bash -c "source ~/.profile && echo \$MY_BASH_VAR")
echo $VAR
This will print 123 in the output console when the job builds.
We are porting our Jenkins pipeline to work on Windows environments.
The Jenkins master connects to our Windows remote (named winremote) using Cygwin sshd.
As described on this page, the Remote root directory of the node is given as a plain Windows path (in this case, it is set to C:\cygwin64\home\jenkins\jenkins-slave-dir)
This minimal pipeline example:
node("winremote")
{
echo "Entering Windows remote"
sh "ls -l"
}
fails with the error:
[Pipeline] echo
Entering Windows remote
[Pipeline] sh
[C:\cygwin64\home\jenkins\jenkins-slave-dir\workspace\proto-platforms] Running shell script
sh: C:\cygwin64\home\jenkins\jenkins-slave-dir\workspace\proto-platforms#tmp\durable-a739272f\script.sh: command not found
SSHing into the Windows remote, I was able to see that Jenkins actually created the workspace subdirectory in C:\cygwin64\home\jenkins\jenkins-slave-dir, but it is left empty.
Is there a known way to use the sh pipeline step on such a remote?
A PR from blatinville, merged a few hours after this question was posted, solves this first issue.
Sadly, it introduces another problem, described in the ticket JENKINS-41225, with the error:
nohup: failed to run command 'sh': No such file or directory
There is a proposed PR for a quickfix of this issue.
Then there is a last problem with how the durable-task-plugin evaluates whether a task is still alive using ps, with another PR fixing it.
Temporary solution
Until those (or equivalent) fixes are applied, one can compile a Cygwin-compatible durable-task-plugin with the following commands:
git clone https://github.com/Adnn/durable-task-plugin.git -b cygwin_fixes
cd durable-task-plugin/
mvn clean install -DskipTests
This notably generates the target/durable-task.hpi file, which can be used to replace the durable-task.jpi file installed by Jenkins in its plugins folder. Jenkins then needs to be restarted.
We have a Jenkins job running on a Jenkins server instance A. The current build number for this job is say 58.
We are migrating this Jenkins job to a new Jenkins server - B. However, there is a need to retain the build number - 58 from the previous server in this new Jenkins instance B.
Is this possible? If yes, how?
Thank you
If you only intend to keep the build number intact for the job in the new Jenkins server, you can achieve it simply by writing a script that populates the nextBuildNumber file in $JENKINS_HOME/jobs/<job_name>/ with the build number that you wish to have.
Something like this (script.sh):
#!/bin/bash -x
JENKINS_HOME=/var/lib/jenkins
mkdir -p $JENKINS_HOME/jobs/<new_job> && cp -r $JENKINS_HOME/jobs/<old_job>/* $JENKINS_HOME/jobs/<new_job>/
OLD_BUILD_NO=$(cat $JENKINS_HOME/jobs/<old_job>/nextBuildNumber)
NEW_BUILD_NO=$(expr $OLD_BUILD_NO - 1)
echo $NEW_BUILD_NO > $JENKINS_HOME/jobs/<new_job>/nextBuildNumber
chown -R jenkins:jenkins $JENKINS_HOME/jobs/<new_job>/
Now run this script as:
sudo bash script.sh
Although it creates the required job in the same Jenkins server instance, the basic idea is the same: populate the nextBuildNumber file.
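The core mechanism in isolation (using a temp directory instead of a real $JENKINS_HOME):

```shell
# nextBuildNumber is a plain text file holding the next build number to use.
job_dir=$(mktemp -d)                  # stand-in for $JENKINS_HOME/jobs/<job>
echo 59 > "$job_dir/nextBuildNumber"  # old server reached build 58, so next is 59
cat "$job_dir/nextBuildNumber"
rm -r "$job_dir"
```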
The accepted answer to modify the nextBuildNumber file sadly didn't work for me, but I found this answer by jayan in another Stack Overflow question:
https://stackoverflow.com/a/34951963
Try running the below script in the Jenkins Script Console. Change "workFlow" to your job name:
def job = Jenkins.instance.getItem("workFlow")
job.nextBuildNumber = 10
job.saveNextBuildNumber()