I'm running a basic pipeline that executes pylint on my repository code.
My Jenkins runs on Debian Etch; the Jenkins version is 2.231.
At the end of the pipeline, I'm getting the following error:
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: script returned exit code 24
Finished: FAILURE
As this page https://wiki.jenkins.io/display/JENKINS/Job+Exit+Status explains, exit code 24 refers to a "too many open files" error.
If I remove the pylint part of the job, the pipeline executes smoothly.
I tried to configure the limit in /etc/security/limits.conf:
jenkins soft nofile 64000
jenkins hard nofile 64000
and in the Jenkins config file /etc/default/jenkins:
MAXOPENFILES=64000
If I put "ulimit -n" in the pipeline, the configured value is displayed, but it has no effect: the result is still ERROR: script returned exit code 24.
The problem is that pylint does not return 0 even when it runs successfully: its exit code is a bit mask of the message categories it emitted (here 24 = 8 + 16, i.e. refactor and convention messages), not a "too many open files" error.
The solution is to use the --exit-zero option when running pylint:
pylint --exit-zero src
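In a pipeline stage this could look like the following minimal sketch (the stage name is illustrative; the src path comes from the command above):

pipeline {
    agent any
    stages {
        stage('Lint') {
            steps {
                // --exit-zero makes pylint always exit with status 0,
                // so lint findings no longer abort the pipeline
                sh 'pylint --exit-zero src'
            }
        }
    }
}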
I have an sbt project in Jenkins, created as a Jenkins Pipeline project. I have installed sbt in Jenkins, checked the auto-install checkbox, and selected version 1.2.8.
Here is my Jenkinsfile:
pipeline {
    agent any
    stages {
        stage('Reload') {
            steps {
                echo "Reloading..."
                //sh "sbt reload"
                sh "${tool name: 'sbt1.2.8', type: 'org.jvnet.hudson.plugins.SbtPluginBuilder$SbtInstallation'}/bin/sbt compile"
            }
        }
    }
}
My sbt settings in Jenkins are as described above (auto-install, version 1.2.8). Here are the Jenkins console logs:
+ /var/lib/jenkins/tools/org.jvnet.hudson.plugins.SbtPluginBuilder_SbtInstallation/sbt1.2.8/bin/sbt compile
[0m[[0m[0minfo[0m] [0m[0mLoading settings for project interpret-backend-project-jenkinsfile-build from plugins.sbt ...[0m
[0m[[0m[0minfo[0m] [0m[0mLoading project definition from /var/lib/jenkins/workspace/interpret-backend-project-jenkinsfile/project[0m
[0m[[0m[0minfo[0m] [0m[0mLoading settings for project interpret-backend-project-jenkinsfile from build.sbt ...[0m
[0m[[0m[0minfo[0m] [0m[0mSet current project to interpret (in build file:/var/lib/jenkins/workspace/interpret-backend-project-jenkinsfile/)[0m
[0m[[0m[0minfo[0m] [0m[0mExecuting in batch mode. For better performance use sbt's shell[0m
[0m[[0m[32msuccess[0m] [0m[0mTotal time: 4 s, completed Nov 25, 2020, 4:53:05 PM[0m
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
[Checks API] No suitable checks publisher found.
Finished: SUCCESS
[0m[[0m[0minfo[0m] [0m[0m
Why is this being displayed, and how can I fix it?
These are ANSI color codes. You can either embrace them to have colorful logs or disable them.
To enable colors in Jenkins you can modify your pipeline definition:
pipeline {
    ...
    options {
        ansiColor('xterm')
    }
    ...
}
This is a reference to the AnsiColor plugin.
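Applied to the Jenkinsfile from the question, this might look like the following sketch (the sbt1.2.8 tool name is taken from the question):

pipeline {
    agent any
    options {
        // render ANSI escape sequences as colors instead of printing them raw
        ansiColor('xterm')
    }
    stages {
        stage('Reload') {
            steps {
                sh "${tool name: 'sbt1.2.8', type: 'org.jvnet.hudson.plugins.SbtPluginBuilder$SbtInstallation'}/bin/sbt compile"
            }
        }
    }
}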
If you can't use this plugin and want to disable colors in the sbt logs, you can do that by setting the sbt.color option: for example, by launching sbt with -Dsbt.color=false (I see that you can add this in the UI) or by adding it to the SBT_OPTS environment variable:
pipeline {
    ...
    environment {
        SBT_OPTS = "${SBT_OPTS} -Dsbt.color=false"
    }
    ...
}
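Alternatively, the flag can be passed directly on the sbt command line in the sh step from the question (a sketch; only the -Dsbt.color=false argument is new):

sh "${tool name: 'sbt1.2.8', type: 'org.jvnet.hudson.plugins.SbtPluginBuilder$SbtInstallation'}/bin/sbt -Dsbt.color=false compile"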
Check out the sbt docs and take a look at the sbt.ci option as well; it should be set automatically on Jenkins.
I have a strange problem: when I run a build in a container in OpenShift/OKD I get the following:
Running in Durability level: MAX_SURVIVABILITY
[Pipeline] node
Still waiting to schedule task
‘Jenkins’ doesn’t have label ‘slave’
Agent slave-8n2r5 is provisioned from template Kubernetes Pod Template
Agent specification [Kubernetes Pod Template] (slave):
* [jnlp] docker-registry.default.svc:5000/openshift/jenkins-slave-base-centos7:v3.9
Running on slave-8n2r5 in /tmp/workspace/test_job
[Pipeline] {
[Pipeline] stage (hello)
Using the ‘stage’ step without a block argument is deprecated
Entering stage hello
Proceeding
[Pipeline] echo
dupa
[Pipeline] sh
[test_job] Running shell script
sh: /tmp/workspace/test_job#tmp/durable-bda908b8/script.sh: Permission denied
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: script returned exit code 126
Finished: FAILURE
Pipeline:
node('slave') {
    stage 'hello'
    println('dupa')
    sh 'git clone http://pac-app-test-01.raiffeisen.pl:8081/a/cm-devops-okd-example-python'
}
Slave container config:
<org.csanchez.jenkins.plugins.kubernetes.PodTemplate>
<inheritFrom></inheritFrom>
<name>slave</name>
<namespace></namespace>
<privileged>false</privileged>
<capOnlyOnAlivePods>false</capOnlyOnAlivePods>
<alwaysPullImage>false</alwaysPullImage>
<instanceCap>2147483647</instanceCap>
<slaveConnectTimeout>100</slaveConnectTimeout>
<idleMinutes>0</idleMinutes>
<activeDeadlineSeconds>0</activeDeadlineSeconds>
<label>slave</label>
<serviceAccount>jenkins</serviceAccount>
<nodeSelector></nodeSelector>
<nodeUsageMode>NORMAL</nodeUsageMode>
<customWorkspaceVolumeEnabled>false</customWorkspaceVolumeEnabled>
<workspaceVolume class="org.csanchez.jenkins.plugins.kubernetes.volumes.workspace.EmptyDirWorkspaceVolume">
<memory>false</memory>
</workspaceVolume>
<volumes/>
<containers>
<org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>
<name>jnlp</name>
<image>docker-registry.default.svc:5000/openshift/jenkins-slave-base-centos7:v3.9</image>
<privileged>false</privileged>
<alwaysPullImage>true</alwaysPullImage>
<workingDir>/tmp</workingDir>
<command></command>
<args>${computer.jnlpmac} ${computer.name}</args>
<ttyEnabled>false</ttyEnabled>
<resourceRequestCpu></resourceRequestCpu>
<resourceRequestMemory></resourceRequestMemory>
<resourceLimitCpu></resourceLimitCpu>
<resourceLimitMemory></resourceLimitMemory>
<envVars/>
<ports/>
<livenessProbe>
<execArgs></execArgs>
<timeoutSeconds>0</timeoutSeconds>
<initialDelaySeconds>0</initialDelaySeconds>
<failureThreshold>0</failureThreshold>
<periodSeconds>0</periodSeconds>
<successThreshold>0</successThreshold>
</livenessProbe>
</org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>
Both the Jenkins master container and the slave run in the same namespace.
I can log in to the container and there is no problem with permissions. I think it is a silly mistake, but I cannot find it on my own. Could you help me with this? I'm totally confused.
As my OpenShift mounts emptyDir() volumes with the noexec option, I found a workaround for this problem: in the pipeline I change the workspace:
node('slave') {
    ws('/tmp/test/' + env.JOB_NAME) {
        println('dupa')
        sh 'git clone http://xxxx:8081/a/cm-devops-okd-example-python'
    }
}
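A similar workaround in declarative syntax, assuming the same 'slave' label (the customWorkspace path is illustrative), would be:

pipeline {
    agent {
        node {
            label 'slave'
            // keep the workspace outside the noexec emptyDir mount
            customWorkspace "/tmp/test/${env.JOB_NAME}"
        }
    }
    stages {
        stage('Clone') {
            steps {
                sh 'git clone http://xxxx:8081/a/cm-devops-okd-example-python'
            }
        }
    }
}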
I am trying to invoke the salt state below using Jenkins:
create_script:
  file.managed:
    - name: /tmp/broc/import_props.sh
    - source: salt://projects/broc/jboss/files/import.sh.jinja
    - template: jinja

Import_properties:
  cmd.script:
    - name: /tmp/broc/import.sh
    - cwd: /tmp/broc
The Jenkins console output is:
ID: create_script
Function: file.managed
Name: /tmp/broc/import.sh
Result: True
Comment: File /tmp/broc/import.sh updated
Started: 11:31:13.736928
Duration: 166.319 ms
Changes:
----------
diff:
New file
mode:
0644
ID: Import_properties
Function: cmd.script
Name: /tmp/broc/import.sh
Result: False
Comment: Command '/tmp/broc/import.sh' run
Started: 11:31:13.903378
Duration: 399.825 ms
Changes:
----------
pid:
8292
retcode:
1
And the Jenkins build finished with SUCCESS:
Succeeded: 21 (changed=22)
Failed: 1
Total states run: 22
Total run time: 30.338 s"}}]
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Finished: SUCCESS
My question: the SaltStack state ID Import_properties has Result: False, so the Jenkins build should also finish as FAILURE. In the above case the SaltStack result is ignored and the build finishes with SUCCESS. Is there a way to make the Jenkins build fail based on the SaltStack result?
Here is the Jenkins pipeline I am using:
try {
    saltCmd = "\"salt -E \"($target)\" \ state.apply projects.alip.process-server \
pillar=\'{\"region\": \"${Region}\",\"siteid\":\"${SiteID}\",\"dbuser\":\"${DBUSER}\",\"dbpass\":\"${DBPASS}\"}\' \""
    result = salt authtype: 'pam',
        clientInterface: local(
            arguments: saltCmd,
            blockbuild: true,
            function: 'cmd.run',
            target: "$my_salttarget",
            saveFile: true,
            targettype: 'glob'),
        credentialsId: "$my_saltcred",
        servername: "$my_saltserver"
} catch (e) {
    result = e.toString()
    currentBuild.result = 'FAILURE'
} finally {
    echo result.replace("\\n", '\n')
}
I am new to Jenkins Pipeline scripting. Can you suggest how to add a post-build step under finally that parses the Jenkins console output, looks for a string, and marks the build as a failure if it matches? This is similar to the Text Finder plugin, except written as a pipeline script.
Even if your state failed, Jenkins believes the salt command it invoked still ran successfully.
Jenkins is not able to detect the errors inside salt itself.
It can only tell if salt runs successfully or not.
This is the same as when you run salt from the command line: even if the state fails, the salt shell command still returns 0.
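To get what the question asks for (failing the build based on the salt output rather than its exit code), one option is to inspect the captured output in the finally block. A minimal scripted sketch, where the salt step from the question is elided and the 'Result: False' marker is an assumption based on the console output above:

node {
    def result = ''
    try {
        // ... the salt step from the question runs here and populates 'result' ...
    } catch (e) {
        result = e.toString()
        currentBuild.result = 'FAILURE'
    } finally {
        echo result.replace("\\n", '\n')
        // mark the build as failed if any applied state reported Result: False
        if (result.contains('Result: False')) {
            currentBuild.result = 'FAILURE'
        }
    }
}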
Your issue here is (as others have said) due to Jenkins using the return code of the salt command itself and NOT the return code of the action taken by the state you applied.
My 2 cents here is the
--retcode-passthrough
option you can pass to your salt command.
This option makes the salt command's return code reflect the outcome of the states it applied.
Simply put, if anything fails in a state then the salt command will return a failure return code.
Official doc here
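A sketch of how the flag might be added to the saltCmd from the question; whether it is honoured depends on your salt version and on how the command is invoked (the option is documented for salt-call), so treat this as an assumption to verify:

// hypothetical target pattern; the real one comes from the pipeline above
def target = 'my-minion*'
// --retcode-passthrough asks salt to propagate state failures into its exit code
def saltCmd = "salt --retcode-passthrough -E \"(${target})\" state.apply projects.alip.process-server"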
When I run a build on my Jenkins, it always prints the [Pipeline] steps as it moves through the pipeline. The current console output:
Build context: CI
:clean
:app:clean
BUILD SUCCESSFUL in 22s
2 actionable tasks: 2 executed
[Pipeline] script
[Pipeline] {
[Pipeline] properties
[Pipeline] sh
I want to see only the actual executions I've declared. The console output should look like:
Build context: CI
:clean
:app:clean
BUILD SUCCESSFUL in 22s
2 actionable tasks: 2 executed
I'm trying to push my artifacts to Artifactory with a Jenkins Pipeline, which calls the Gradle tool.
I am following the examples published on GitHub:
Example1
Example2
My Jenkins Pipeline script:
stage('Perform Gradle Release') {
    // ssh-agent required to perform git push (when tagging the branch on release)
    sshagent([git_credential]) {
        sh "./gradlew clean release unSnapshotVersion -Prelease.useAutomaticVersion=true -Prelease.releaseVersion=${release_version} -Prelease.newVersion=${development_version}"
    }
    // Create an Artifactory server instance
    def server = Artifactory.server('my-artifactory')
    // Create and set an Artifactory Gradle Build instance:
    def rtGradle = Artifactory.newGradleBuild()
    rtGradle.resolver server: server, repo: 'libs-release'
    rtGradle.deployer server: server, repo: 'libs-release-local'
    // Use the Gradle wrapper
    rtGradle.useWrapper = true
    // Create buildInfo
    def buildInfo = Artifactory.newBuildInfo()
    buildInfo.env.capture = true
    buildInfo.env.filter.addInclude("*")
    // Run Gradle:
    rtGradle.run rootDir: "./", buildFile: 'build.gradle', tasks: 'clean artifactoryPublish', buildInfo: buildInfo
    // Publish the build-info to Artifactory:
    server.publishBuildInfo buildInfo
}
My Gradle file is very light; I'm just using the Gradle Release Plugin to perform the release.
When executing the pipeline, it fails with this message:
:artifactoryPublish
BUILD SUCCESSFUL
Total time: 17.451 secs
ERROR: Couldn't read generated build info at : /tmp/generated.build.info4898776990575217114.json
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
hudson.model.Run$RunnerAbortedException
at org.jfrog.hudson.pipeline.Utils.getGeneratedBuildInfo(Utils.java:188)
at org.jfrog.hudson.pipeline.steps.ArtifactoryGradleBuild$Execution.run(ArtifactoryGradleBuild.java:127)
at org.jfrog.hudson.pipeline.steps.ArtifactoryGradleBuild$Execution.run(ArtifactoryGradleBuild.java:96)
at org.jenkinsci.plugins.workflow.steps.AbstractSynchronousStepExecution.start(AbstractSynchronousStepExecution.java:40)
...
Finished: FAILURE
When I check on the server, there is no such file /tmp/generated.build.info4898776990575217114.json (the user has of course permission to write to /tmp).
Thanks for your help.
[EDIT] It is weird, but I found some files named "buildInfo2408849984051060030.properties" containing the information. The name is not the same, nor is the format, and these files are stored on my Jenkins machine, not on the slave executing the pipeline.
Thanks @tamir-hadad, it has indeed been fixed in 2.8.2.