Jenkins pipeline error during image build with WAR in OpenShift

We are implementing a CI/CD pipeline in OpenShift 3.9. At the stage where a JBoss image is built with the WAR, the pipeline fails intermittently: the Build Image with app stage succeeds on some runs and fails on others.
Below is the relevant piece of the Jenkinsfile.
stage('Build Image') {
    openshift.withCluster() {
        openshift.withProject(env.DEV_PROJECT) {
            def bcSelector = openshift.selector("bc", "jboss")
            def bcExists = bcSelector.exists()
            if (!bcExists) {
                openshift.newBuild("--name=jboss", "--image-stream=jboss-eap70-openshift:1.5", "--binary=true")
            } else {
                echo "The specified image already exists"
            }
        }
    }
}
stage('Build Image with app') {
    sh "rm -rf oc-build && mkdir -p oc-build/deployments"
    sh "cp /var/lib/jenkins/jobs/cicd/jobs/cicd-tasks-pipeline/workspace/target/hello-1.0.war oc-build/deployments/ROOT.war"
    openshift.withCluster() {
        openshift.withProject(env.DEV_PROJECT) {
            openshift.selector("bc", "jboss").startBuild("--from-dir=oc-build", "--wait=true")
        }
    }
}
In the Build Image stage, it takes the JBoss image and there is no problem. In the Build Image with app stage, the JBoss image is combined with the WAR build. Below is the Jenkins output during the pipeline build.
[workspace] Running shell script
+ rm -rf oc-build
+ mkdir -p oc-build/deployments
[Pipeline] sh
[workspace] Running shell script
+ cp /var/lib/jenkins/jobs/cicd/jobs/cicd-tasks-pipeline/workspace/target/hello-1.0.war oc-build/deployments/ROOT.war
[Pipeline] _OcContextInit
[Pipeline] _OcContextInit
[Pipeline] readFile
[Pipeline] _OcAction
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: Error running start-build on at least one item: [buildconfig/jboss];
{reference={}, err=Uploading directory "oc-build" as binary input for the build ...
Error from server (BadRequest): build jboss-2 encountered an error: No logs are available., verb=start-build, cmd=oc --server=https://172.30.0.1:443 --certificate-authority=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt --namespace=cicd --token=XXXXX start-build buildconfig/jboss --from-dir=oc-build --wait=true -o=name , out=, status=1}
Finished: FAILURE
Could you please let us know why this error happens so frequently, and why roughly 2 out of every 5 builds fail with it?
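Not an explanation of the intermittent failures, but one way to surface the real build error instead of "No logs are available" is to follow the build logs from the pipeline. A minimal sketch, assuming the same OpenShift Client plugin DSL as above; startBuild() returns a selector for the build it creates, and logs('-f') streams that build's output into the Jenkins console until it finishes:

openshift.withCluster() {
    openshift.withProject(env.DEV_PROJECT) {
        // startBuild() returns a selector pointing at the build it created
        def build = openshift.selector("bc", "jboss").startBuild("--from-dir=oc-build")
        // Follow the build logs; this blocks until the build completes and
        // prints the actual failure reason into the Jenkins console output.
        build.logs('-f')
    }
}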

Related

Verify the Software via Jenkins Pipeline

stage('Verfiy Software') {
    dir('sudo /opt/tomcat/apache/bin/') {
        sh './version.sh'
    }
}
I am running Tomcat on an Ubuntu box, installed under /opt/tomcat/apache/.
I have created a pipeline script to verify the Tomcat version by pointing at the bin location, but unfortunately I am getting the error message below.
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Verfiy Software)
[Pipeline] dir
Running in /var/lib/jenkins/workspace/project/sudo /opt/tomcat/apache/bin/version.sh
[Pipeline] {
[Pipeline] sh
+ ./version.sh
/var/lib/jenkins/workspace/project/sudo /opt/tomcat/apache/bin/version.sh@tmp/durable-5c73b35b/script.sh: 1: /var/lib/jenkins/workspace/project/sudo /opt/tomcat/apache/bin/version.sh@tmp/durable-5c73b35b/script.sh: ./version.sh: not found
[Pipeline] }
[Pipeline] // dir
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: script returned exit code 127
Finished: FAILURE
Can you please help me with this?
The dir step treats that whole string, including sudo, as a directory path under the workspace. You only need sudo to run version.sh, so call the script by its full path:
stage('Verfiy Software') {
    sh 'sudo /opt/tomcat/apache/bin/version.sh'
}
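If you also want the version string available inside the pipeline (for example to echo or archive it), sh can capture stdout with returnStdout; a small sketch, assuming the same path:

stage('Verfiy Software') {
    // returnStdout captures the script output instead of only printing it to the log
    def tomcatVersion = sh(script: 'sudo /opt/tomcat/apache/bin/version.sh', returnStdout: true).trim()
    echo "Tomcat version info: ${tomcatVersion}"
}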

Problem with Connecting Jenkins to SonarQube

I have a Jenkins pipeline that periodically pulls from GitLab, builds different repos into a multi-component platform, and runs and tests it. Now I have installed a SonarQube server on the same machine (Ubuntu 18.04) and I want to connect my Jenkins to SonarQube.
In Jenkins:
I set up the SonarQube scanner under Global Tool Configuration as below:
I generated a token in SonarQube, and in the Jenkins configuration I set up the server as below, BUT I couldn't find any place to insert the token (and I think this is the problem):
In the Jenkins pipeline, this is how I added a stage for SonarQube:
stage('SonarQube analysis') {
    steps {
        script {
            scannerHome = tool 'SonarQube';
        }
        withSonarQubeEnv('SonarQube') {
            sh "${scannerHome}/bin/sonar-scanner"
        }
    }
}
But this fails with the logs below and ERROR: script returned exit code 127:
[Pipeline] { (SonarQube analysis)
[Pipeline] script
[Pipeline] {
[Pipeline] tool
Invalid tool ID
[Pipeline] }
[Pipeline] // script
[Pipeline] withSonarQubeEnv
Injecting SonarQube environment variables using the configuration: SonarQube
[Pipeline] {
[Pipeline] sh
+ /var/lib/jenkins/tools/hudson.plugins.sonar.SonarRunnerInstallation/SonarQube/bin/sonar-scanner
/var/lib/jenkins/workspace/wws-full-test@tmp/durable-2c68acd1/script.sh: 1: /var/lib/jenkins/workspace/wws-full-test@tmp/durable-2c68acd1/script.sh: /var/lib/jenkins/tools/hudson.plugins.sonar.SonarRunnerInstallation/SonarQube/bin/sonar-scanner: not found
[Pipeline] }
WARN: Unable to locate 'report-task.txt' in the workspace. Did the SonarScanner succeeded?
[Pipeline] // withSonarQubeEnv
[Pipeline] }
[Pipeline] // stage
And when I check my Jenkins tools on disk, the sonar scanner installation is not there:
$ ls /var/lib/jenkins/tools/
jenkins.plugins.nodejs.tools.NodeJSInstallation
Can someone please let me know how I can connect Jenkins to SonarQube?
Create a token in SonarQube and add it to the SonarQube server configuration in Jenkins so that Jenkins can connect.
You also have to create a project in SonarQube and pass its key as a parameter:
sh """
${scannerHome}/bin/sonar-scanner \
-Dsonar.projectKey=your_project_key_created_in_sonarqube_as_project \
-Dsonar.sources=. \
"""

Jenkins pipeline does not build remotely

I am trying to run a pipeline job from the master on my other slave node.
The pipeline is like this:
pipeline {
    agent {
        label "virtual"
    }
    stages {
        stage("test one") {
            steps {
                echo " test test test"
            }
        }
        stage("test two") {
            steps {
                echo " testttttttttt "
            }
        }
    }
}
The syntax gives no error, but it doesn't seem to build on my slave server. However, when I run a freestyle job restricted to that label via Restrict where this project can be run, with an Execute shell step containing echo "test test", it is built on my slave server.
What is wrong with my pipeline? Am I missing something?
Output after the build:
Running in Durability level: MAX_SURVIVABILITY
[Pipeline] Start of Pipeline
[Pipeline] node
Running on virtual in /home/virtual/jenkins/workspace/demoo
[Pipeline] {
[Pipeline] stage
[Pipeline] { (test one)
[Pipeline] echo
test test test
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (test two)
[Pipeline] echo
testttttttttt
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Add the path you want in the Remote root directory field of the node configuration (the highlighted field in the screenshot):
The build already works the way you set it up: the steps are executed on the slave. If you add something like cloning a repository to your steps, the workspace directory will be created.
Pipeline and freestyle jobs behave differently here. A freestyle job creates the directory in workspace as soon as it runs for the first time, while a pipeline job creates the directory only once it actually needs it.
I created a simple Pipeline:
pipeline {
    agent {
        label "linux"
    }
    stages {
        stage("test one") {
            steps {
                sh "echo 'test test test' > text.txt"
            }
        }
    }
}
I converted your echo to a sh step because my slave is a Linux node. The sh step creates a text.txt file. As soon as I run this job, the directory is created:
[<user>@<server> test-pipeline]$ pwd
/var/lib/jenkins/workspace/test-pipeline
[<user>@<server> test-pipeline]$ ls -l
total 4
-rw-r----- 1 <user> <group> 15 Oct 7 16:49 text.txt
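If you want to confirm at runtime which agent a pipeline build actually ran on, you can print the node name and hostname from a step; a small sketch of an extra stage for the declarative pipeline above:

stage("where am I") {
    steps {
        // NODE_NAME is set by Jenkins to the agent the build is running on
        echo "Running on Jenkins node: ${env.NODE_NAME}"
        sh 'hostname && pwd'
    }
}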

sfdx error while running in Jenkins pipeline

Can anyone help me with this sfdx issue? I was able to create a scratch org and a package version using a Jenkinsfile, but I am getting an error while installing the metadata on the scratch org and sandbox.
[Pipeline] sh
[SFDX_SFDX_Package_Installation] Running shell script
+ sfdx force:package:install --wait 10 --publishwait 10 --package sfdx_naresh@0.1.0-1 -k nar123 -r -u scratchorg@so7.org
autoupdate:: /root/.cache/sfdx/update.lock is locked with an active
writer
ERROR: INVALID_HEADER_TYPE.
[Pipeline] error
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // withCredentials
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: Package Installing Failed
Finished: FAILURE
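The autoupdate lock message in the output suggests the CLI was trying to self-update during the run. Not a confirmed fix for the INVALID_HEADER_TYPE error, but one thing worth ruling out is that self-update, assuming your Salesforce CLI version honours the SFDX_AUTOUPDATE_DISABLE environment variable; a sketch:

// Hypothetical wrapper around the failing step; SFDX_AUTOUPDATE_DISABLE
// stops the CLI from trying to update itself during the build.
withEnv(['SFDX_AUTOUPDATE_DISABLE=true']) {
    sh 'sfdx force:package:install --wait 10 --publishwait 10 --package sfdx_naresh@0.1.0-1 -k nar123 -r -u scratchorg@so7.org'
}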

OKD 3.9 jenkins slave Permission denied

I have a strange problem: when I run a build in a container in OpenShift/OKD I get the following:
Running in Durability level: MAX_SURVIVABILITY
[Pipeline] node
Still waiting to schedule task
‘Jenkins’ doesn’t have label ‘slave’
Agent slave-8n2r5 is provisioned from template Kubernetes Pod Template
Agent specification [Kubernetes Pod Template] (slave):
* [jnlp] docker-registry.default.svc:5000/openshift/jenkins-slave-base-centos7:v3.9
Running on slave-8n2r5 in /tmp/workspace/test_job
[Pipeline] {
[Pipeline] stage (hello)
Using the ‘stage’ step without a block argument is deprecated
Entering stage hello
Proceeding
[Pipeline] echo
dupa
[Pipeline] sh
[test_job] Running shell script
sh: /tmp/workspace/test_job@tmp/durable-bda908b8/script.sh: Permission denied
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: script returned exit code 126
Finished: FAILURE
Pipeline:
node('slave') {
    stage 'hello'
    println('dupa')
    sh 'git clone http://pac-app-test-01.raiffeisen.pl:8081/a/cm-devops-okd-example-python'
}
Slave container config:
<org.csanchez.jenkins.plugins.kubernetes.PodTemplate>
<inheritFrom></inheritFrom>
<name>slave</name>
<namespace></namespace>
<privileged>false</privileged>
<capOnlyOnAlivePods>false</capOnlyOnAlivePods>
<alwaysPullImage>false</alwaysPullImage>
<instanceCap>2147483647</instanceCap>
<slaveConnectTimeout>100</slaveConnectTimeout>
<idleMinutes>0</idleMinutes>
<activeDeadlineSeconds>0</activeDeadlineSeconds>
<label>slave</label>
<serviceAccount>jenkins</serviceAccount>
<nodeSelector></nodeSelector>
<nodeUsageMode>NORMAL</nodeUsageMode>
<customWorkspaceVolumeEnabled>false</customWorkspaceVolumeEnabled>
<workspaceVolume class="org.csanchez.jenkins.plugins.kubernetes.volumes.workspace.EmptyDirWorkspaceVolume">
<memory>false</memory>
</workspaceVolume>
<volumes/>
<containers>
<org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>
<name>jnlp</name>
<image>docker-registry.default.svc:5000/openshift/jenkins-slave-base-centos7:v3.9</image>
<privileged>false</privileged>
<alwaysPullImage>true</alwaysPullImage>
<workingDir>/tmp</workingDir>
<command></command>
<args>${computer.jnlpmac} ${computer.name}</args>
<ttyEnabled>false</ttyEnabled>
<resourceRequestCpu></resourceRequestCpu>
<resourceRequestMemory></resourceRequestMemory>
<resourceLimitCpu></resourceLimitCpu>
<resourceLimitMemory></resourceLimitMemory>
<envVars/>
<ports/>
<livenessProbe>
<execArgs></execArgs>
<timeoutSeconds>0</timeoutSeconds>
<initialDelaySeconds>0</initialDelaySeconds>
<failureThreshold>0</failureThreshold>
<periodSeconds>0</periodSeconds>
<successThreshold>0</successThreshold>
</livenessProbe>
</org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>
Both the Jenkins master container and the slave run in the same namespace.
I can log in to the container and there is no problem with permissions there. I think it is a silly mistake but I cannot find it on my own. Could you help me with this? I'm totally confused.
Since my OpenShift mounts emptyDir() volumes with the noexec option, I found a workaround for this problem: in the pipeline I change the workspace:
node('slave') {
    ws('/tmp/test/' + env.JOB_NAME) {
        println('dupa')
        sh 'git clone http://xxxx:8081/a/cm-devops-okd-example-python'
    }
}
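For reference, a sketch of the same workaround in declarative syntax, using customWorkspace to move the build out of the default emptyDir workspace that is mounted noexec (label and path are illustrative):

pipeline {
    agent {
        node {
            label 'slave'
            // Work outside the default workspace, whose emptyDir mount is noexec
            customWorkspace '/tmp/test/my-job'
        }
    }
    stages {
        stage('clone') {
            steps {
                sh 'git clone http://xxxx:8081/a/cm-devops-okd-example-python'
            }
        }
    }
}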
