Getting a list of files from the Jenkins workspace using Groovy fails on one node but works on another.
Here is the code in the pipeline:
def pd = pwd()
bat "dir $pd"
def bat_files = new FileNameFinder().getFileNames(pd, 'G*.bat')
Output:
C:\Jenkins\Slave\workspace\TestFolder\CodeTestPipe>dir C:\Jenkins\Slave\workspace\TestFolder\CodeTestPipe
Volume in drive C is OSDisk
Volume Serial Number is AAA1-73FA
Directory of C:\Jenkins\Slave\workspace\TestFolder\CodeTestPipe
01/23/2017 05:34 PM <DIR> .
01/23/2017 05:34 PM <DIR> ..
01/23/2017 05:34 PM 4 GOL.bat
1 File(s) 4 bytes
2 Dir(s) 134,906,617,856 bytes free
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
C:\Jenkins\Slave\workspace\TestFolder\CodeTestPipe does not exist.
at org.apache.tools.ant.types.AbstractFileSet.getDirectoryScanner(AbstractFileSet.java:483)
at org.codehaus.groovy.ant.FileIterator.setNextObject(FileIterator.java:119)
at org.codehaus.groovy.ant.FileIterator.hasNext(FileIterator.java:81)
at groovy.util.FileNameFinder.getFileNames(FileNameFinder.groovy:44)
at groovy.util.FileNameFinder$getFileNames.callCurrent(Unknown Source)
at groovy.util.FileNameFinder.getFileNames(FileNameFinder.groovy:31)
at
Pipelines are executed on the Jenkins master, and only through the magic of remoting-enabled APIs do things happen on the selected node. So File, and everything using File, doesn't work, and never will: It always executes on master.
Source: https://groups.google.com/forum/#!topic/jenkinsci-users/yBiYbwWjg-I
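Because File-based APIs always run on the master, anything that needs to look at the agent's workspace has to go through a pipeline step. If the Pipeline Utility Steps plugin is available, its findFiles step is one such option; this is only a sketch, not part of the original answer:

// findFiles runs in the workspace of the current node, unlike FileNameFinder
def bats = findFiles(glob: 'G*.bat')
for (f in bats) {
    echo f.name
}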
I was able to get the files by running dir in a bat step and capturing its output (the leading @ keeps the command line itself out of the captured stdout):
def bat_out = bat(returnStdout: true, script: '@echo off & dir /b G*.bat').trim()
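The trimmed output is one file name per line, so it can be turned into a Groovy list on the master side; a minimal sketch:

// split the captured dir output into a list of file names (one name per line)
def names = bat_out.readLines()
echo "Found ${names.size()} .bat file(s): ${names}"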
I'm running a basic pipeline that executes pylint on my repository code.
My Jenkins runs on Debian Etch; the Jenkins version is 2.231.
At the end of the pipeline, I get the following error:
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: script returned exit code 24
Finished: FAILURE
As this page https://wiki.jenkins.io/display/JENKINS/Job+Exit+Status explains, the error code 24 refers to a "too many open files" error.
If I remove the pylint part of the job, the pipeline executes smoothly.
I tried to configure the limit in /etc/security/limits.conf:
jenkins soft nofile 64000
jenkins hard nofile 64000
and in the Jenkins config file /etc/default/jenkins:
MAXOPENFILES=64000
If I put "ulimit -n" in the pipeline, the configured value is displayed, but it has no effect: the result is still ERROR: script returned exit code 24.
The problem comes from pylint itself, which does not return exit code 0 even when it runs successfully: its exit status is a bit mask of the message categories it emitted, so 24 here means refactor (8) and convention (16) messages, not "too many open files".
The solution is to use the --exit-zero option when running pylint:
pylint --exit-zero src
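If you would rather keep pylint's real status (for example, to fail the build only on actual errors) without letting the non-zero exit abort the stage, capturing it with returnStatus is an alternative; a sketch, assuming the same src directory:

// returnStatus stops the sh step from failing the build on a non-zero exit code
def rc = sh(returnStatus: true, script: 'pylint src')
// pylint's exit status is a bit mask: 1=fatal, 2=error, 4=warning, 8=refactor, 16=convention, 32=usage error
if ((rc & 3) != 0) {
    error "pylint reported fatal or error messages (exit code ${rc})"
}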
I am trying to analyze code with sonar-scanner, which is installed on our Jenkins server, using an Ant build.
Because we work on a closed network, every analysis first downloads 'sonarqube-ant-task-2.7.0.1612.jar' from our Nexus repository.
But there is a problem.
Even though sonarqube-ant-task-2.7.0.1612.jar was downloaded (to workspace/modules) and I verified it with ls -al,
+ curl -u ****:**** http://nexus.adpaas.cloud.samsungds.net/repository/VM/lib/sonarqube-ant-task-2.7.0.1612.jar --output sonarqube-ant-task-2.7.0.1612.jar
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
100 618k 100 618k 0 0 14.8M 0 --:--:-- --:--:-- --:--:-- 15.1M
+ mv ./sonarqube-ant-task-2.7.0.1612.jar ./workspace/modules
[Pipeline] }
[Pipeline] // withCredentials
[Pipeline] echo
Location : /home/jenkins/workspace/portal-space185041/ssd/dev/fw5channel-sonar
[Pipeline] }
[Pipeline] // script
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Build)
[Pipeline] script
[Pipeline] {
[Pipeline] echo
Build : 20201012.135547
[Pipeline] sh
> git rev-list --no-walk 0af1889ccfbd347f427d3a1d5d15c9a33886e6be # timeout=10
+ ls -al workspace/modules
total 632
drwxr-xr-x 2 jenkins jenkins 152 Oct 12 13:55 .
drwxr-xr-x 3 jenkins jenkins 152 Oct 12 13:55 ..
-rw-r--r-- 1 jenkins jenkins 633825 Oct 12 13:55 sonarqube-ant-task-2.7.0.1612.jar
the analysis still uses the default Sonar library located in the Jenkins image, as shown below:
[sonar:sonar] Apache Ant(TM) version 1.10.5 compiled on July 10 2018
[sonar:sonar] SonarQube Ant Task version: 2.2
[sonar:sonar] Loaded from: file:/usr/share/ant/ant-1.10.5/lib/sonar-ant-task-2.2.jar
I also set up build.xml for the Sonar analysis:
<target name="sonar" depends="init, resources-dev, compile">
    <taskdef uri="antlib:org.sonar.ant" resource="org/sonar/ant/antlib.xml">
        <!-- Update the following line, or put the "sonarqube-ant-task-*.jar" file in your "$HOME/.ant/lib" folder -->
        <classpath path="./workspace/modules/sonarqube-ant-task-2.7.0.1612.jar.jar"/>
    </taskdef>
    <sonar:sonar/>
</target>
Given the official SonarQube comment above ("Update the following line, or put the sonarqube-ant-task-*.jar file in your $HOME/.ant/lib folder"), I guess the $HOME/.ant/lib fallback only applies when I don't set the 'classpath' option, doesn't it?
Although it runs without errors, I have to turn this into a template, so I need to know the exact reason for this behavior.
Does anyone know the reason? Thanks a lot in advance.
When I run a build on my Jenkins, it always prints the [Pipeline] step markers while moving through the pipeline steps. The current console output:
Build context: CI
:clean
:app:clean
BUILD SUCCESSFUL in 22s
2 actionable tasks: 2 executed
[Pipeline] script
[Pipeline] {
[Pipeline] properties
[Pipeline] sh
I want to see only the actual executions I've declared. The console output should look like:
Build context: CI
:clean
:app:clean
BUILD SUCCESSFUL in 22s
2 actionable tasks: 2 executed
How is it possible to export a variable from the sh context to the Groovy context of the Jenkins pipeline job?
Pipeline Code:
node {
echo 'Hello World'
sh 'export VERSION="v$(date)"'
echo env.VERSION
}
Output:
[Pipeline] sh
[test-pipeline] Running shell script
++ date
+ export 'VERSION=vThu Dec 1 12:14:40 CET 2016'
+ VERSION='vThu Dec 1 12:14:40 CET 2016'
[Pipeline] echo
null
I am using Jenkins ver. 2.34.
Update:
There is the possibility to write the variable to a temporary file and read it back later, but this looks like a hack to me: it is not thread-safe by default when using parallel builds, and it does not scale if you need to export multiple variables in one run. Is there a proper way to do this?
I hope this will help.
node('master') {
    stage('stage1') {
        def commit = sh(returnStdout: true, script: '''echo hi
echo bye | grep -o "e"
date
echo lol''').split()
        echo "${commit[-1]}"
    }
}
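For the original example, the same idea applies more directly: capture the command's stdout with returnStdout and assign it on the Groovy side. A minimal sketch (env.VERSION is just the variable name the question used):

node {
    echo 'Hello World'
    // capture the command's stdout instead of exporting a variable inside the shell
    env.VERSION = sh(returnStdout: true, script: 'echo "v$(date)"').trim()
    echo env.VERSION
}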
I'm trying to push my artifacts to Artifactory with a Jenkins Pipeline, which calls the Gradle tool.
I am following the examples published on GitHub:
Example1
Example2
My Jenkins Pipeline script:
stage('Perform Gradle Release') {
    // ssh-agent required to perform git push (when tagging the branch on release)
    sshagent([git_credential]) {
        sh "./gradlew clean release unSnapshotVersion -Prelease.useAutomaticVersion=true -Prelease.releaseVersion=${release_version} -Prelease.newVersion=${development_version}"
    }
    // Create an Artifactory server instance
    def server = Artifactory.server('my-artifactory')
    // Create and set an Artifactory Gradle Build instance:
    def rtGradle = Artifactory.newGradleBuild()
    rtGradle.resolver server: server, repo: 'libs-release'
    rtGradle.deployer server: server, repo: 'libs-release-local'
    // Use the Gradle wrapper
    rtGradle.useWrapper = true
    // Create build info
    def buildInfo = Artifactory.newBuildInfo()
    buildInfo.env.capture = true
    buildInfo.env.filter.addInclude("*")
    // Run Gradle:
    rtGradle.run rootDir: "./", buildFile: 'build.gradle', tasks: 'clean artifactoryPublish', buildInfo: buildInfo
    // Publish the build-info to Artifactory:
    server.publishBuildInfo buildInfo
}
My Gradle file is very light; I'm just using the Gradle Release Plugin to perform the release.
When executing the pipeline, it fails with this message:
:artifactoryPublish
BUILD SUCCESSFUL
Total time: 17.451 secs
ERROR: Couldn't read generated build info at : /tmp/generated.build.info4898776990575217114.json
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
hudson.model.Run$RunnerAbortedException
at org.jfrog.hudson.pipeline.Utils.getGeneratedBuildInfo(Utils.java:188)
at org.jfrog.hudson.pipeline.steps.ArtifactoryGradleBuild$Execution.run(ArtifactoryGradleBuild.java:127)
at org.jfrog.hudson.pipeline.steps.ArtifactoryGradleBuild$Execution.run(ArtifactoryGradleBuild.java:96)
at org.jenkinsci.plugins.workflow.steps.AbstractSynchronousStepExecution.start(AbstractSynchronousStepExecution.java:40)
...
Finished: FAILURE
When I check on the server, there is no such file /tmp/generated.build.info4898776990575217114.json (the user has of course permission to write to /tmp).
Thanks for your help.
[EDIT] It is weird, but I found some files named "buildInfo2408849984051060030.properties" containing the information. Neither the name nor the format matches, and these files are stored on my Jenkins master, not on the slave executing the pipeline.
Thanks @tamir-hadad, it has indeed been fixed in 2.8.2.