Suppress Pipeline Output in Jenkinsfile

When I run a build on my Jenkins, it always prints the [Pipeline] marker lines while moving through the pipeline steps. The current console output:
Build context: CI
:clean
:app:clean
BUILD SUCCESSFUL in 22s
2 actionable tasks: 2 executed
[Pipeline] script
[Pipeline] {
[Pipeline] properties
[Pipeline] sh
I want to see only the actual executions I've declared. The console output should look like this:
Build context: CI
:clean
:app:clean
BUILD SUCCESSFUL in 22s
2 actionable tasks: 2 executed
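
As far as I know there is no supported option to suppress these [Pipeline] marker lines in the classic console; they are written by the Pipeline engine itself, and Blue Ocean avoids them by showing per-step logs instead. A workaround sketch is to filter the plain-text log after the fact via the standard consoleText endpoint (BUILD_URL is the environment variable Jenkins sets for each build; substitute the build's URL when fetching from outside a build):

curl -s "$BUILD_URL/consoleText" | grep -v '^\[Pipeline\]'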

Related

Jenkins pipeline not respecting a lock on a stage?

Jenkins version: 2.319.3
Lockable Resources plugin version: 2.18
I have a simple Jenkins pipeline that is triggered by a webhook. It runs very frequently, so we put a lock on one stage so that it won't run in multiple builds at the same time. The problem is that, every so often, multiple builds run that stage at the same time despite the lock. Here's the code:
node("Windows_Server") {
properties([disableConcurrentBuilds()])
stage("Setup") {
some stuff...
} // end 'Setup' stage
stage("Stage 1") {
lock("Stage 1") {
withCredentials(some stuff...)
} // end lock
} // end stage
} // end node
Both builds ran at almost the same time (maybe 0.5 seconds apart), and it appears that the first build didn't set a lock:
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Stage 1)
[Pipeline] withCredentials
some stuff...
This is the second build:
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Stage 1)
[Pipeline] lock
Trying to acquire lock on [Stage 1]
Resource [Stage 1] did not exist. Created.
Lock acquired on [Stage 1]
[Pipeline] {
[Pipeline] withCredentials
some stuff...
Then in Blue Ocean, the output shows that the stage for those builds started about 120 milliseconds apart; one at [2023-01-25T21:39:34.917Z], the other at [2023-01-25T21:39:35.040Z].
So my question is, why would the first one not put a lock on that stage, or if it did why did the second one not see the lock?
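
One detail worth noting (an observation, not a confirmed diagnosis): properties([disableConcurrentBuilds()]) only updates the job configuration when the step executes, so it takes effect for builds started after that point, and the second build's log shows the lock resource being created on the fly ("Resource [Stage 1] did not exist. Created."). A minimal sketch that declares the property before entering the node and locks on an explicitly named resource; pre-declaring "Stage 1" under Lockable Resources in the global configuration avoids relying on ad-hoc resource creation:

properties([disableConcurrentBuilds()])

node("Windows_Server") {
    stage("Setup") {
        // some stuff...
    }
    stage("Stage 1") {
        // lock(resource: ...) blocks until the named resource is free
        lock(resource: "Stage 1") {
            // withCredentials(...) { ... }
        }
    }
}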

Why does my Jenkins pipeline fail with Error code 24

I'm running a basic pipeline that executes pylint on my repository code.
My Jenkins runs on Debian Etch; the Jenkins version is 2.231.
At the end of the pipeline, I'm getting the following error:
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: script returned exit code 24
Finished: FAILURE
As this page https://wiki.jenkins.io/display/JENKINS/Job+Exit+Status explains, the error code 24 refers to a "too many open files" error.
If I remove the pylint part of the job, the pipeline executes smoothly.
I tried to configure the limit in /etc/security/limits.conf:
jenkins soft nofile 64000
jenkins hard nofile 64000
and in the Jenkins config file /etc/default/jenkins:
MAXOPENFILES=64000
If I put "ulimit -n" in the pipeline, the configured value is displayed, but the result remains the same: ERROR: script returned exit code 24.
The problem comes from pylint, which doesn't return 0 even when it runs successfully: its exit status is a bit mask (1 = fatal, 2 = error, 4 = warning, 8 = refactor, 16 = convention, 32 = usage error), so code 24 = 8 + 16 simply means that refactor and convention messages were issued; it has nothing to do with open file limits.
The solution is to use the --exit-zero option when running pylint:
pylint --exit-zero src
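
In a pipeline, a minimal sketch of the two options (tolerate all findings, or mask only the informational bits so real errors still fail the build):

stage('Lint') {
    // Option 1: never fail on lint findings
    sh 'pylint --exit-zero src'
    // Option 2: keep the fatal (1), error (2) and usage (32) bits fatal,
    // but ignore the warning (4), refactor (8) and convention (16) bits
    sh 'pylint src || exit $(( $? & 35 ))'
}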

OKD 3.9 Jenkins slave: Permission denied

I have a strange problem: when I run a build in a container in OpenShift/OKD, I get the following:
Running in Durability level: MAX_SURVIVABILITY
[Pipeline] node
Still waiting to schedule task
‘Jenkins’ doesn’t have label ‘slave’
Agent slave-8n2r5 is provisioned from template Kubernetes Pod Template
Agent specification [Kubernetes Pod Template] (slave):
* [jnlp] docker-registry.default.svc:5000/openshift/jenkins-slave-base-centos7:v3.9
Running on slave-8n2r5 in /tmp/workspace/test_job
[Pipeline] {
[Pipeline] stage (hello)
Using the ‘stage’ step without a block argument is deprecated
Entering stage hello
Proceeding
[Pipeline] echo
dupa
[Pipeline] sh
[test_job] Running shell script
sh: /tmp/workspace/test_job@tmp/durable-bda908b8/script.sh: Permission denied
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: script returned exit code 126
Finished: FAILURE
Pipeline:
node('slave') {
    stage 'hello'
    println('dupa')
    sh 'git clone http://pac-app-test-01.raiffeisen.pl:8081/a/cm-devops-okd-example-python'
}
Slave container config:
<org.csanchez.jenkins.plugins.kubernetes.PodTemplate>
<inheritFrom></inheritFrom>
<name>slave</name>
<namespace></namespace>
<privileged>false</privileged>
<capOnlyOnAlivePods>false</capOnlyOnAlivePods>
<alwaysPullImage>false</alwaysPullImage>
<instanceCap>2147483647</instanceCap>
<slaveConnectTimeout>100</slaveConnectTimeout>
<idleMinutes>0</idleMinutes>
<activeDeadlineSeconds>0</activeDeadlineSeconds>
<label>slave</label>
<serviceAccount>jenkins</serviceAccount>
<nodeSelector></nodeSelector>
<nodeUsageMode>NORMAL</nodeUsageMode>
<customWorkspaceVolumeEnabled>false</customWorkspaceVolumeEnabled>
<workspaceVolume class="org.csanchez.jenkins.plugins.kubernetes.volumes.workspace.EmptyDirWorkspaceVolume">
<memory>false</memory>
</workspaceVolume>
<volumes/>
<containers>
<org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>
<name>jnlp</name>
<image>docker-registry.default.svc:5000/openshift/jenkins-slave-base-centos7:v3.9</image>
<privileged>false</privileged>
<alwaysPullImage>true</alwaysPullImage>
<workingDir>/tmp</workingDir>
<command></command>
<args>${computer.jnlpmac} ${computer.name}</args>
<ttyEnabled>false</ttyEnabled>
<resourceRequestCpu></resourceRequestCpu>
<resourceRequestMemory></resourceRequestMemory>
<resourceLimitCpu></resourceLimitCpu>
<resourceLimitMemory></resourceLimitMemory>
<envVars/>
<ports/>
<livenessProbe>
<execArgs></execArgs>
<timeoutSeconds>0</timeoutSeconds>
<initialDelaySeconds>0</initialDelaySeconds>
<failureThreshold>0</failureThreshold>
<periodSeconds>0</periodSeconds>
<successThreshold>0</successThreshold>
</livenessProbe>
</org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>
Both the Jenkins master container and the slave run in the same namespace.
I can log in to the container, and there is no problem with permissions there. I think it's a silly mistake, but I cannot find it on my own. Could you help me with this? I'm totally confused.
Since my OpenShift mounts emptyDir() volumes with the noexec option, I found a workaround for this problem: in the pipeline I change the workspace:
node('slave') {
    ws('/tmp/test/' + env.JOB_NAME) {
        println('dupa')
        sh 'git clone http://xxxx:8081/a/cm-devops-okd-example-python'
    }
}
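
For reference, here is the same workaround written with the block form of stage, since the build log above warns that the block-less stage 'hello' syntax is deprecated:

node('slave') {
    stage('hello') {
        // relocate the workspace off the noexec emptyDir mount
        ws('/tmp/test/' + env.JOB_NAME) {
            println('dupa')
            sh 'git clone http://pac-app-test-01.raiffeisen.pl:8081/a/cm-devops-okd-example-python'
        }
    }
}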

Test reports is missing in Jenkins

When my build fails, the test report is missing in Jenkins. What am I doing wrong?
This is part of my Jenkinsfile:
node {
    stage('Integration tests') {
        git url: "https://$autotestsGitRepo", branch: 'develop', credentialsId: gitlabCredentialsId
        sh 'chmod +x gradlew && ./gradlew clean test'
        step([$class: 'JUnitResultArchiver', testResults: 'build/test-results/test/*.xml'])
    }
}
This is my Jenkins output:
5 tests completed, 5 failed
> Task :test FAILED
FAILURE: Build failed with an exception.
* What went wrong:
Execution failed for task ':test'.
> There were failing tests. See the report at: file:///var/jenkins_home/workspace/test2/build/reports/tests/test/index.html
* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output. Run with --scan to get full insights.
* Get more help at https://help.gradle.org
BUILD FAILED in 1m 7s
4 actionable tasks: 4 executed
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: script returned exit code 1
Finished: FAILURE
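
The problem is that the sh step throws as soon as ./gradlew clean test exits non-zero, so the JUnitResultArchiver step on the next line is never reached. A minimal sketch, assuming the standard JUnit plugin: publish the results in a finally block so they are archived whether or not the tests pass:

node {
    stage('Integration tests') {
        git url: "https://$autotestsGitRepo", branch: 'develop', credentialsId: gitlabCredentialsId
        try {
            sh 'chmod +x gradlew && ./gradlew clean test'
        } finally {
            // runs even when the test step above failed
            junit 'build/test-results/test/*.xml'
        }
    }
}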

Jenkins: run Gradle directly from a Jenkins pipeline script

There is a job of type Pipeline on Jenkins, and the Gradle plugin is installed.
I'm trying to run Gradle tasks directly, but without success.
Current pipeline script:
node {
    stage('Compile') {
        // First variant
        gradle {
            tasks: 'clean'
            tasks: 'compileJava'
        }
        // Second variant
        gradle tasks: 'clean'
        gradle tasks: 'compileJava'
        // Third variant
        gradle('clean')
        gradle('compileJava')
    }
}
This script doesn't fail, but it does nothing.
Output to the console:
[Pipeline] stage
[Pipeline] { (Compile)
[Pipeline] }
[Pipeline] // stage
How can I run gradle directly from the pipeline script?
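
As far as I can tell, the Gradle plugin does not expose a gradle pipeline step (its builder is for freestyle jobs), which is also why the first variant silently does nothing: inside the closure, tasks: 'clean' is just a Groovy labeled statement with no effect. A minimal sketch of the usual approaches; the installation name 'gradle7' is hypothetical and must match a Gradle installation configured under Manage Jenkins > Global Tool Configuration:

node {
    stage('Compile') {
        // Option 1: use the project's Gradle wrapper, if the repo ships one
        sh './gradlew clean compileJava'

        // Option 2: use a Gradle installation configured in Jenkins
        // ('gradle7' is a placeholder name)
        def gradleHome = tool name: 'gradle7', type: 'gradle'
        sh "${gradleHome}/bin/gradle clean compileJava"
    }
}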
