I have a strange problem when I run a build in a container in OpenShift/OKD:
Running in Durability level: MAX_SURVIVABILITY
[Pipeline] node
Still waiting to schedule task
‘Jenkins’ doesn’t have label ‘slave’
Agent slave-8n2r5 is provisioned from template Kubernetes Pod Template
Agent specification [Kubernetes Pod Template] (slave):
* [jnlp] docker-registry.default.svc:5000/openshift/jenkins-slave-base-centos7:v3.9
Running on slave-8n2r5 in /tmp/workspace/test_job
[Pipeline] {
[Pipeline] stage (hello)
Using the ‘stage’ step without a block argument is deprecated
Entering stage hello
Proceeding
[Pipeline] echo
dupa
[Pipeline] sh
[test_job] Running shell script
sh: /tmp/workspace/test_job#tmp/durable-bda908b8/script.sh: Permission denied
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: script returned exit code 126
Finished: FAILURE
Pipeline:
node('slave') {
    stage 'hello'
    println('dupa')
    sh 'git clone http://pac-app-test-01.raiffeisen.pl:8081/a/cm-devops-okd-example-python'
}
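As the console output notes, calling the stage step without a block argument is deprecated; a block-form equivalent of the same pipeline (a sketch only, unrelated to the permission error) would be:

node('slave') {
    stage('hello') {
        println('dupa')
        sh 'git clone http://pac-app-test-01.raiffeisen.pl:8081/a/cm-devops-okd-example-python'
    }
}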
Slave container config:
<org.csanchez.jenkins.plugins.kubernetes.PodTemplate>
<inheritFrom></inheritFrom>
<name>slave</name>
<namespace></namespace>
<privileged>false</privileged>
<capOnlyOnAlivePods>false</capOnlyOnAlivePods>
<alwaysPullImage>false</alwaysPullImage>
<instanceCap>2147483647</instanceCap>
<slaveConnectTimeout>100</slaveConnectTimeout>
<idleMinutes>0</idleMinutes>
<activeDeadlineSeconds>0</activeDeadlineSeconds>
<label>slave</label>
<serviceAccount>jenkins</serviceAccount>
<nodeSelector></nodeSelector>
<nodeUsageMode>NORMAL</nodeUsageMode>
<customWorkspaceVolumeEnabled>false</customWorkspaceVolumeEnabled>
<workspaceVolume class="org.csanchez.jenkins.plugins.kubernetes.volumes.workspace.EmptyDirWorkspaceVolume">
<memory>false</memory>
</workspaceVolume>
<volumes/>
<containers>
<org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>
<name>jnlp</name>
<image>docker-registry.default.svc:5000/openshift/jenkins-slave-base-centos7:v3.9</image>
<privileged>false</privileged>
<alwaysPullImage>true</alwaysPullImage>
<workingDir>/tmp</workingDir>
<command></command>
<args>${computer.jnlpmac} ${computer.name}</args>
<ttyEnabled>false</ttyEnabled>
<resourceRequestCpu></resourceRequestCpu>
<resourceRequestMemory></resourceRequestMemory>
<resourceLimitCpu></resourceLimitCpu>
<resourceLimitMemory></resourceLimitMemory>
<envVars/>
<ports/>
<livenessProbe>
<execArgs></execArgs>
<timeoutSeconds>0</timeoutSeconds>
<initialDelaySeconds>0</initialDelaySeconds>
<failureThreshold>0</failureThreshold>
<periodSeconds>0</periodSeconds>
<successThreshold>0</successThreshold>
</livenessProbe>
</org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>
Both the master Jenkins container and the slave run in the same namespace.
I can log in to the container and there is no problem with permissions. I think it is a silly mistake but I cannot find it on my own. Could you help me with this? I'm totally confused.
Since my OpenShift mounts emptyDir() volumes with the noexec option, I found a workaround for this problem: in the pipeline I change the workspace:
node('slave') {
    ws('/tmp/test/' + env.JOB_NAME) {
        println('dupa')
        sh 'git clone http://xxxx:8081/a/cm-devops-okd-example-python'
    }
}
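To see which filesystems on the agent are actually mounted noexec before picking a workspace path, a quick check from the pipeline could look like this (a sketch, assuming the same 'slave' label):

node('slave') {
    sh 'mount | grep noexec || true'   // list mounts that carry the noexec flag
    sh 'df -h /tmp'                    // confirm which filesystem backs the chosen workspace path
}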
Related
I'm running a basic pipeline that executes pylint on my repository code.
My Jenkins runs on Debian Etch; the Jenkins version is 2.231.
At the end of the pipeline, I'm getting the following error:
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: script returned exit code 24
Finished: FAILURE
As this page https://wiki.jenkins.io/display/JENKINS/Job+Exit+Status explains, the error code 24 refers to a "too many open files" error.
If I remove the pylint part of the job, the pipelines executes smoothly.
I tried to configure the limit in /etc/security/limits.conf:
jenkins soft nofile 64000
jenkins hard nofile 64000
and in the Jenkins config file /etc/default/jenkins:
MAXOPENFILES=64000
If I run "ulimit -n" in the pipeline, the configured value is displayed, but it has no effect on the result, which remains: ERROR: script returned exit code 24.
The problem comes from pylint, which doesn't return exit code 0 even when it ran successfully: its exit status is a bit mask of the message categories it emitted (here 24 = 8 + 16, i.e. refactor and convention messages), not an OS-level "too many open files" error.
The solution is to use the --exit-zero option when running pylint:
pylint --exit-zero src
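If you would rather not suppress the status entirely, an alternative (a sketch for a scripted pipeline; the path 'src' is a placeholder) is to capture pylint's exit code and decide yourself which bits should fail the build:

node {
    // run pylint without letting a non-zero status abort the sh step
    def rc = sh(script: 'pylint src', returnStatus: true)
    echo "pylint exit code: ${rc}"
    // fail only on the fatal (1) or error (2) bits of pylint's bit-encoded status
    if (rc & 0x03) {
        error 'pylint reported fatal or error messages'
    }
}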
I'm trying to set up a Jenkins pipeline (using the declarative syntax) that runs unit and feature tests on two separate, on-demand AWS EC2 instances. The pipeline works perfectly when run on a single instance and without the parallel stages. As soon as I switch to parallel stages, any shell script fails with this cryptic message:
process apparently never started in
/home/admin/workspace/GSWebRuby_Test#tmp/durable-b0d8c4b4 (running
Jenkins temporarily with
-Dorg.jenkinsci.plugins.durabletask.BourneShellScript.LAUNCH_DIAGNOSTICS=true
might make the problem clearer)
I've googled extensively and came across several bug reports about the Durable Task plugin, which appears to be responsible for this message. I'm using the latest version of the plugin, v1.33, and none of the reported problems seem to apply to my case, e.g. failures on unusual architectures or when running Docker containers. I've also downgraded and re-upgraded the plugin (toggling between versions 1.30 and 1.33). Also, to reiterate, sh commands work without issue when I don't use parallel stages.
I've created a simplified pipeline to debug the problem. Note that the shell commands are also simple, e.g. "env | sort", or "pwd".
pipeline {
    agent none
    environment {
        DB_USER = credentials('db-user')
        DB_PASS = credentials('db-pass')
    }
    stages {
        stage('Setup') {
            failFast false
            parallel {
                stage('foo') {
                    agent {
                        label 'jenkins-slave-ondemand'
                    }
                    steps {
                        echo 'In stage foo'
                        sh 'env|sort'
                    }
                }
                stage('bar') {
                    agent {
                        label 'jenkins-slave-ondemand'
                    }
                    steps {
                        echo 'In stage bar'
                        sh 'pwd'
                    }
                }
            }
        }
    }
}
This is the console output:
Running in Durability level: MAX_SURVIVABILITY
[Pipeline] Start of Pipeline
[Pipeline] withCredentials
Masking supported pattern matches of $DB_PASS or $DB_USER
[Pipeline] {
[Pipeline] withEnv
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Setup)
[Pipeline] parallel
[Pipeline] { (Branch: foo)
[Pipeline] { (Branch: bar)
[Pipeline] stage
[Pipeline] { (foo)
[Pipeline] stage
[Pipeline] { (bar)
[Pipeline] node
[Pipeline] node
Still waiting to schedule task
All nodes of label ‘jenkins-slave-ondemand’ are offline
Still waiting to schedule task
All nodes of label ‘jenkins-slave-ondemand’ are offline
Running on EC2 (Jenkins AWS EC2) - Jenkins slave (i-0982299c572100c71) in /home/admin/workspace/GSWebRuby_Test
[Pipeline] {
[Pipeline] echo
In stage foo
[Pipeline] sh
Running on EC2 (Jenkins AWS EC2) - Jenkins slave (i-092ecac8e6c257270) in /home/admin/workspace/GSWebRuby_Test
[Pipeline] {
[Pipeline] echo
In stage bar
[Pipeline] sh
process apparently never started in /home/admin/workspace/GSWebRuby_Test#tmp/durable-b0d8c4b4
(running Jenkins temporarily with -Dorg.jenkinsci.plugins.durabletask.BourneShellScript.LAUNCH_DIAGNOSTICS=true might make the problem clearer)
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
Failed in branch foo
process apparently never started in /home/admin/workspace/GSWebRuby_Test#tmp/durable-b6cfcff9
(running Jenkins temporarily with -Dorg.jenkinsci.plugins.durabletask.BourneShellScript.LAUNCH_DIAGNOSTICS=true might make the problem clearer)
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
Failed in branch bar
[Pipeline] // parallel
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // withCredentials
[Pipeline] End of Pipeline
ERROR: script returned exit code -2
Finished: FAILURE
Am I doing something wrong in the way I've set up the pipeline? Any pointers would be greatly appreciated.
Edit:
After setting -Dorg.jenkinsci.plugins.durabletask.BourneShellScript.LAUNCH_DIAGNOSTICS=true in JENKINS_JAVA_OPTIONS, I see this additional output:
In stage bar
[Pipeline] sh
nohup: failed to run command 'sh': No such file or directory
process apparently never started in /home/admin/workspace/GSWebRuby_Test#tmp/durable-099a2e56
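The nohup message suggests that sh cannot be found on the PATH of the process the Durable Task wrapper spawns on the on-demand agent. A quick sanity check, run on the same label without the parallel block (where sh still works), could look like this sketch; the stage name is a placeholder:

pipeline {
    agent { label 'jenkins-slave-ondemand' }
    stages {
        stage('Check PATH') {
            steps {
                // print the PATH the sh step sees and where sh and nohup resolve to
                sh 'echo $PATH; command -v sh; command -v nohup'
            }
        }
    }
}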
I have a Jenkins pipeline that periodically pulls from GitLab, builds different repos into a multi-component platform, and runs and tests it. Now I have installed a SonarQube server on the same machine (Ubuntu 18.04) and I want to connect my Jenkins to SonarQube.
In Jenkins:
I set up the SonarQube scanner under Global Tool Configuration as below:
I generated a token in SonarQube, and in the Jenkins configuration I set up the server as below, BUT I couldn't find any place to insert the token (and I think this is the problem):
In the Jenkins pipeline, this is how I added a stage for SonarQube:
stage('SonarQube analysis') {
    steps {
        script {
            scannerHome = tool 'SonarQube';
        }
        withSonarQubeEnv('SonarQube') {
            sh "${scannerHome}/bin/sonar-scanner"
        }
    }
}
But this fails with the logs below and ERROR: script returned exit code 127:
[Pipeline] { (SonarQube analysis)
[Pipeline] script
[Pipeline] {
[Pipeline] tool
Invalid tool ID
[Pipeline] }
[Pipeline] // script
[Pipeline] withSonarQubeEnv
Injecting SonarQube environment variables using the configuration: SonarQube
[Pipeline] {
[Pipeline] sh
+ /var/lib/jenkins/tools/hudson.plugins.sonar.SonarRunnerInstallation/SonarQube/bin/sonar-scanner
/var/lib/jenkins/workspace/wws-full-test#tmp/durable-2c68acd1/script.sh: 1: /var/lib/jenkins/workspace/wws-full-test#tmp/durable-2c68acd1/script.sh: /var/lib/jenkins/tools/hudson.plugins.sonar.SonarRunnerInstallation/SonarQube/bin/sonar-scanner: not found
[Pipeline] }
WARN: Unable to locate 'report-task.txt' in the workspace. Did the SonarScanner succeeded?
[Pipeline] // withSonarQubeEnv
[Pipeline] }
[Pipeline] // stage
And when I check my Jenkins tools on disk, the Sonar scanner installation is not there:
$ ls /var/lib/jenkins/tools/
jenkins.plugins.nodejs.tools.NodeJSInstallation
Can someone please let me know how I can connect Jenkins to SonarQube?
Create and add a token to be able to connect to SonarQube.
You also have to create a project in SonarQube and pass its key as a parameter:
sh """
${scannerHome}/bin/sonar-scanner \
-Dsonar.projectKey=your_project_key_created_in_sonarqube_as_project \
-Dsonar.sources=. \
"""
We are implementing a CI/CD pipeline in OpenShift 3.9. At the stage where I build a JBoss image with the war, I am getting an error. The pipeline fails at the BUILD IMAGE WITH APP stage. It succeeds on some runs and fails on others.
Below is the relevant piece of code in the Jenkins pipeline.
stage('Build Image') {
    openshift.withCluster() {
        openshift.withProject(env.DEV_PROJECT) {
            def bcSelector = openshift.selector("bc", "jboss")
            def bcExists = bcSelector.exists()
            if (!bcExists) {
                openshift.newBuild("--name=jboss", "--image-stream=jboss-eap70-openshift:1.5", "--binary=true")
            } else {
                echo "The specified image already exists"
            }
        }
    }
}
stage('Build Image with app') {
    sh "rm -rf oc-build && mkdir -p oc-build/deployments"
    sh "cp /var/lib/jenkins/jobs/cicd/jobs/cicd-tasks-pipeline/workspace/target/hello-1.0.war oc-build/deployments/ROOT.war"
    openshift.withCluster() {
        openshift.withProject(env.DEV_PROJECT) {
            openshift.selector("bc", "jboss").startBuild("--from-dir=oc-build", "--wait=true")
        }
    }
}
The BUILD IMAGE stage takes the JBoss image and completes without problems. In the BUILD IMAGE WITH APP stage, the war is combined with the JBoss build. Below is the Jenkins output during the pipeline build.
[workspace] Running shell script
+ rm -rf oc-build
+ mkdir -p oc-build/deployments
[Pipeline] sh
[workspace] Running shell script
+ cp /var/lib/jenkins/jobs/cicd/jobs/cicd-tasks-pipeline/workspace/target/hello-1.0.war oc-build/deployments/ROOT.war
[Pipeline] _OcContextInit
[Pipeline] _OcContextInit
[Pipeline] readFile
[Pipeline] _OcAction
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: Error running start-build on at least one item: [buildconfig/jboss];
{reference={}, err=Uploading directory "oc-build" as binary input for the build ...
Error from server (BadRequest): build jboss-2 encountered an error: No logs are available., verb=start-build, cmd=oc --server=https://172.30.0.1:443 --certificate-authority=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt --namespace=cicd --token=XXXXX start-build buildconfig/jboss --from-dir=oc-build --wait=true -o=name , out=, status=1}
Finished: FAILURE
Could you please let us know why this error is happening frequently, and why 2 out of 5 builds are failing with it?
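The "No logs are available" part of the error hides whatever actually went wrong inside the binary build. One way to surface it is to start the build without --wait and follow its log; this is only a sketch reusing the names from the stages above, so verify the client-plugin API against your installed version:

openshift.withCluster() {
    openshift.withProject(env.DEV_PROJECT) {
        // startBuild returns a selector for the build it created
        def build = openshift.selector("bc", "jboss").startBuild("--from-dir=oc-build")
        build.logs("-f")   // follow the build log to see where it fails
    }
}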
Can anyone help me with this issue in sfdx? I was able to execute scratch org and package version creation using a Jenkinsfile, but I am getting an error while installing metadata on a scratch org and a sandbox.
[Pipeline] sh
[SFDX_SFDX_Package_Installation] Running shell script
+ sfdx force:package:install --wait 10 --publishwait 10 --package sfdx_naresh#0.1.0-1 -k nar123 -r -u scratchorg#so7.org
autoupdate:: /root/.cache/sfdx/update.lock is locked with an active
writer
ERROR: INVALID_HEADER_TYPE.
[Pipeline] error
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // withCredentials
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: Package Installing Failed
Finished: FAILURE