Test report is missing in Jenkins - jenkins

When my build fails, the test report is missing in Jenkins. What am I doing wrong?
This is part of my Jenkinsfile:
node {
    stage('Integration tests') {
        git url: "https://$autotestsGitRepo", branch: 'develop', credentialsId: gitlabCredentialsId
        sh 'chmod +x gradlew && ./gradlew clean test'
        step([$class: 'JUnitResultArchiver', testResults: 'build/test-results/test/*.xml'])
    }
}
This is my Jenkins output:
5 tests completed, 5 failed
Task :test FAILED
FAILURE: Build failed with an exception.
What went wrong: Execution failed for task ':test'.
There were failing tests. See the report at: file:///var/jenkins_home/workspace/test2/build/reports/tests/test/index.html
Try: Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output. Run with --scan to get full insights.
Get more help at https://help.gradle.org
BUILD FAILED in 1m 7s
4 actionable tasks: 4 executed
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: script returned exit code 1
Finished: FAILURE
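One way to keep the report from disappearing (a minimal sketch, not necessarily what the original poster needs): because the failing ./gradlew test makes the sh step throw, the JUnitResultArchiver step on the next line never runs. Wrapping the test run in try/finally (or using the junit shorthand step) lets the archiver run even when tests fail:
node {
    stage('Integration tests') {
        git url: "https://$autotestsGitRepo", branch: 'develop', credentialsId: gitlabCredentialsId
        try {
            sh 'chmod +x gradlew && ./gradlew clean test'
        } finally {
            // Runs even when the Gradle build exits non-zero, so the XML still gets archived
            junit 'build/test-results/test/*.xml'
        }
    }
}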

Related

Why does my Jenkins pipeline fail with Error code 24

I'm running a basic pipeline that executes pylint on my repository code.
My Jenkins runs on Debian etch, the Jenkins version is 2.231.
At the end of the pipeline, I'm getting the following error:
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: script returned exit code 24
Finished: FAILURE
As this page https://wiki.jenkins.io/display/JENKINS/Job+Exit+Status explains, the error code 24 refers to a "too many open files" error.
If I remove the pylint part of the job, the pipeline executes smoothly.
I tried to configure the limit in /etc/security/limits.conf
jenkins soft nofile 64000
jenkins hard nofile 64000
and in the Jenkins config file /etc/default/jenkins :
MAXOPENFILES=64000
If I put "ulimit -n" in the pipeline, the configured value is displayed but it has no effect on the result that remains : ERROR: script returned exit code 24.
The problem comes from pylint, which doesn't return a 0 exit code even when it runs successfully.
The solution is to use the --exit-zero option when running pylint.
pylint --exit-zero src
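A minimal pipeline sketch of that fix (the stage name and src path are illustrative): either pass --exit-zero, or capture pylint's exit code with returnStatus: true, since pylint's exit code is a bitmask of message categories rather than a simple pass/fail:
node {
    stage('Lint') {
        // Option 1: pylint always exits 0
        sh 'pylint --exit-zero src'
        // Option 2: don't fail the step; inspect the bitmask exit code yourself
        def rc = sh(script: 'pylint src', returnStatus: true)
        echo "pylint exit code: ${rc}"
    }
}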

jenkins pipeline error during build image with build war in openshift

We are implementing a CI/CD pipeline in OpenShift 3.9. At the stage where I'm building a JBoss image with the war, I am getting an error. The pipeline fails at the BUILD IMAGE WITH APP stage. Sometimes the build succeeds and sometimes it fails.
Below is the relevant piece of the Jenkinsfile.
stage('Build Image') {
    openshift.withCluster() {
        openshift.withProject(env.DEV_PROJECT) {
            def bcSelector = openshift.selector("bc", "jboss")
            def bcExists = bcSelector.exists()
            if (!bcExists) {
                openshift.newBuild("--name=jboss", "--image-stream=jboss-eap70-openshift:1.5", "--binary=true")
            } else {
                echo "The specified image already exists"
            }
        }
    }
}
stage('Build Image with app') {
    sh "rm -rf oc-build && mkdir -p oc-build/deployments"
    sh "cp /var/lib/jenkins/jobs/cicd/jobs/cicd-tasks-pipeline/workspace/target/hello-1.0.war oc-build/deployments/ROOT.war"
    openshift.withCluster() {
        openshift.withProject(env.DEV_PROJECT) {
            openshift.selector("bc", "jboss").startBuild("--from-dir=oc-build", "--wait=true")
        }
    }
}
In the BUILD IMAGE stage, it takes the JBoss image and there is no problem. In the BUILD IMAGE WITH APP stage, the JBoss image is bundled with the war that was built. Below is the Jenkins output during the pipeline build.
[workspace] Running shell script
+ rm -rf oc-build
+ mkdir -p oc-build/deployments
[Pipeline] sh
[workspace] Running shell script
+ cp /var/lib/jenkins/jobs/cicd/jobs/cicd-tasks-pipeline/workspace/target/hello-1.0.war oc-build/deployments/ROOT.war
[Pipeline] _OcContextInit
[Pipeline] _OcContextInit
[Pipeline] readFile
[Pipeline] _OcAction
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: Error running start-build on at least one item: [buildconfig/jboss];
{reference={}, err=Uploading directory "oc-build" as binary input for the build ...
Error from server (BadRequest): build jboss-2 encountered an error: No logs are available., verb=start-build, cmd=oc --server=https://172.30.0.1:443 --certificate-authority=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt --namespace=cicd --token=XXXXX start-build buildconfig/jboss --from-dir=oc-build --wait=true -o=name , out=, status=1}
Finished: FAILURE
Could you please let us know why this error is happening so frequently, and why out of 5 builds 2 fail with this error?
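Not a root-cause fix, but since the failure is intermittent, one possible mitigation (a sketch with an arbitrary retry count, not a confirmed solution) is to retry the binary build; retry is a standard pipeline step and can wrap the start-build call inside the openshift client closures:
stage('Build Image with app') {
    sh "rm -rf oc-build && mkdir -p oc-build/deployments"
    sh "cp /var/lib/jenkins/jobs/cicd/jobs/cicd-tasks-pipeline/workspace/target/hello-1.0.war oc-build/deployments/ROOT.war"
    openshift.withCluster() {
        openshift.withProject(env.DEV_PROJECT) {
            // Re-run the start-build a few times if it fails transiently
            retry(3) {
                openshift.selector("bc", "jboss").startBuild("--from-dir=oc-build", "--wait=true")
            }
        }
    }
}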

sfdx error while running in Jenkins pipeline

Can anyone help me with this issue in sfdx? I was able to execute scratch org and package version creation using a Jenkinsfile, but I'm getting an error while installing metadata on the scratch org and sandbox.
[Pipeline] sh
[SFDX_SFDX_Package_Installation] Running shell script
+ sfdx force:package:install --wait 10 --publishwait 10 --package sfdx_naresh@0.1.0-1 -k nar123 -r -u scratchorg@so7.org
autoupdate:: /root/.cache/sfdx/update.lock is locked with an active
writer
ERROR: INVALID_HEADER_TYPE.
[Pipeline] error
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // withCredentials
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: Package Installing Failed
Finished: FAILURE
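The autoupdate lock message in the log comes from the sfdx CLI's self-updater; whether it is related to the INVALID_HEADER_TYPE error is not confirmed by the question. To rule it out, a hedged sketch that disables the autoupdater for the install step (SFDX_AUTOUPDATE_DISABLE is the CLI's environment switch for this; the stage name is illustrative):
node {
    stage('Install package') {
        // Assumption: disabling the sfdx autoupdater avoids the update.lock contention;
        // the package id and target username are taken from the command above
        withEnv(['SFDX_AUTOUPDATE_DISABLE=true']) {
            sh 'sfdx force:package:install --wait 10 --publishwait 10 --package sfdx_naresh@0.1.0-1 -k nar123 -r -u scratchorg@so7.org'
        }
    }
}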

OKD 3.9 jenkins slave Permission denied

I have a strange problem: when I run a build in a container in OpenShift/OKD, I get the following error:
Running in Durability level: MAX_SURVIVABILITY
[Pipeline] node
Still waiting to schedule task
‘Jenkins’ doesn’t have label ‘slave’
Agent slave-8n2r5 is provisioned from template Kubernetes Pod Template
Agent specification [Kubernetes Pod Template] (slave):
* [jnlp] docker-registry.default.svc:5000/openshift/jenkins-slave-base-centos7:v3.9
Running on slave-8n2r5 in /tmp/workspace/test_job
[Pipeline] {
[Pipeline] stage (hello)
Using the ‘stage’ step without a block argument is deprecated
Entering stage hello
Proceeding
[Pipeline] echo
dupa
[Pipeline] sh
[test_job] Running shell script
sh: /tmp/workspace/test_job@tmp/durable-bda908b8/script.sh: Permission denied
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: script returned exit code 126
Finished: FAILURE
Pipeline:
node('slave') {
    stage 'hello'
    println('dupa')
    sh 'git clone http://pac-app-test-01.raiffeisen.pl:8081/a/cm-devops-okd-example-python'
}
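As an aside, the log above warns that the block-less stage syntax is deprecated; the equivalent block form of the same pipeline would be:
node('slave') {
    stage('hello') {
        println('dupa')
        sh 'git clone http://pac-app-test-01.raiffeisen.pl:8081/a/cm-devops-okd-example-python'
    }
}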
Slave container config:
<org.csanchez.jenkins.plugins.kubernetes.PodTemplate>
<inheritFrom></inheritFrom>
<name>slave</name>
<namespace></namespace>
<privileged>false</privileged>
<capOnlyOnAlivePods>false</capOnlyOnAlivePods>
<alwaysPullImage>false</alwaysPullImage>
<instanceCap>2147483647</instanceCap>
<slaveConnectTimeout>100</slaveConnectTimeout>
<idleMinutes>0</idleMinutes>
<activeDeadlineSeconds>0</activeDeadlineSeconds>
<label>slave</label>
<serviceAccount>jenkins</serviceAccount>
<nodeSelector></nodeSelector>
<nodeUsageMode>NORMAL</nodeUsageMode>
<customWorkspaceVolumeEnabled>false</customWorkspaceVolumeEnabled>
<workspaceVolume class="org.csanchez.jenkins.plugins.kubernetes.volumes.workspace.EmptyDirWorkspaceVolume">
<memory>false</memory>
</workspaceVolume>
<volumes/>
<containers>
<org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>
<name>jnlp</name>
<image>docker-registry.default.svc:5000/openshift/jenkins-slave-base-centos7:v3.9</image>
<privileged>false</privileged>
<alwaysPullImage>true</alwaysPullImage>
<workingDir>/tmp</workingDir>
<command></command>
<args>${computer.jnlpmac} ${computer.name}</args>
<ttyEnabled>false</ttyEnabled>
<resourceRequestCpu></resourceRequestCpu>
<resourceRequestMemory></resourceRequestMemory>
<resourceLimitCpu></resourceLimitCpu>
<resourceLimitMemory></resourceLimitMemory>
<envVars/>
<ports/>
<livenessProbe>
<execArgs></execArgs>
<timeoutSeconds>0</timeoutSeconds>
<initialDelaySeconds>0</initialDelaySeconds>
<failureThreshold>0</failureThreshold>
<periodSeconds>0</periodSeconds>
<successThreshold>0</successThreshold>
</livenessProbe>
</org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>
Both the Jenkins master container and the slave run in the same namespace.
I can log in to the container and there is no problem with permissions there. I think it is a silly mistake but I cannot find it on my own. Could you help me with this? I'm totally confused.
As my OpenShift mounts volumes as emptyDir() with the noexec option, I found a workaround for this problem: in the pipeline I change the workspace:
node('slave') {
    ws('/tmp/test/' + env.JOB_NAME) {
        println('dupa')
        sh 'git clone http://xxxx:8081/a/cm-devops-okd-example-python'
    }
}

Suppress Pipeline Output in Jenkinsfile

When I run a build on my Jenkins, it always prints the [Pipeline] markers while moving through the pipeline steps. The current console output:
Build context: CI
:clean
:app:clean
BUILD SUCCESSFUL in 22s
2 actionable tasks: 2 executed
[Pipeline] script
[Pipeline] {
[Pipeline] properties
[Pipeline] sh
I want to see only the actual executions I've declared. The console output should look like:
Build context: CI
:clean
:app:clean
BUILD SUCCESSFUL in 22s
2 actionable tasks: 2 executed
