Jenkins Pipeline failed

I am running a scripted version of the Jenkins pipeline. It runs through all the stages except the last one, which fails every time.
Here is the Jenkins code that keeps failing:
stage('Deploy Production') {
    echo "Deploy Prod on: ${env.BRANCH_NAME}"
    try {
        if (env.BRANCH_NAME == 'master') {
            withCredentials([[$class: 'UsernamePasswordMultiBinding', credentialsId: 'XX',
                              usernameVariable: 'XXXXX', passwordVariable: 'XXXXXX']]) {
                sh 'npm run build-prod-ci'
                sh 'cf login -u ${XXXXX} -p ${XXXXXX} -a website.com -o XXX -s XXXX'
                sh 'cf blue-green-deploy Dashboard'
                sh 'cf delete Dashboard-old -f'
            }
        }
    } finally {
        deleteDir()
        sh 'cf logout'
    }
}
This is the printout and the error message. Any help in resolving this issue is appreciated.
Plugin blue-green-deploy 1.3.0 successfully installed.
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Deploy Development)
[Pipeline] echo
Deploy Dev on: master
[Pipeline] deleteDir
[Pipeline] sh
+ cf logout
Logging out ...
OK
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Deploy Production)
[Pipeline] echo
Deploy Prod on: master
[Pipeline] withCredentials
Masking supported pattern matches of $XXXXXX
[Pipeline] {
[Pipeline] sh
+ npm run build-prod-ci
npm ERR! code ENOENT
npm ERR! syscall open
npm ERR! path /v/wl/ws/Dashboard_master/package.json
npm ERR! errno -2
npm ERR! enoent ENOENT: no such file or directory, open '/v/wl/ws/Dashboard_master/package.json'
npm ERR! enoent This is related to npm not being able to find a file.
npm ERR! enoent
npm ERR! A complete log of this run can be found in:
npm ERR! /v/wl/wsp/Dashboard_master/.npm/_logs/2022-07-06T22_43_57_239Z-debug.log
[Pipeline] }
[Pipeline] // withCredentials
[Pipeline] deleteDir
[Pipeline] sh
+ cf logout
Logging out ...
OK
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // timeout
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: script returned exit code 254
Previous stages with npm commands worked fine. Here is an example of a stage that worked:
stage('Run project tests and build') {
    echo "Run project tests and build on: ${env.BRANCH_NAME}"
    sh "npm run lint"
    //sh "npm run compodoc-ci"
    //sh "npm run test-coverage"
    sh "npm run build-dev-ci"
}

This issue was self-inflicted, and the comments above gave me the hints I needed to arrive at the solution.
The problem was that I was moving the original Jenkins code (I think it's called a Declarative Pipeline) over to this form (a Scripted Pipeline?), and because Jenkins is not something I work with every day, I converted the code below without thinking. I changed this:
steps {
    when {
        branch 'dev'
    }
}
post {
    always {
        deleteDir()
    }
}

to this:

try {
    if (env.BRANCH_NAME == 'dev') {
        ...
    }
} finally {
    deleteDir()
}
Anyway, deleteDir() deletes the workspace directory, so by the time the next stage ran, npm could no longer find package.json.
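For reference, a rough sketch of the scripted shape that avoids this, with the workspace cleanup done once at the end of the node block instead of inside an earlier stage's finally (stage names and commands are placeholders, not the exact pipeline above):
node {
    try {
        stage('Deploy Development') {
            if (env.BRANCH_NAME == 'dev') {
                sh 'npm run build-dev-ci'
                // cf deploy commands for dev ...
            }
        }
        stage('Deploy Production') {
            if (env.BRANCH_NAME == 'master') {
                sh 'npm run build-prod-ci'
                // cf deploy commands for prod ...
            }
        }
    } finally {
        // Clean up exactly once, after every stage that needs the workspace has run.
        sh 'cf logout || true'
        deleteDir()
    }
}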
Thanks for the help guys. Much appreciated.
Lesson: read your code and understand what it's doing. Copy and paste will only help to a certain extent.

Related

Jenkins throws error when running pipeline

I have a little problem with a Jenkins Pipeline... To be more specific, it always fails, but without any apparent reason (there probably is one, but I cannot find it).
Here is a snippet of the pipeline (the whole pipeline):
pipeline {
    environment {
        DOCKERHUB_CREDENTIALS = credentials("Dockerhub")
        APPLICATION_HOST = "0.0.0.0:8000"
        APPLICATION_PORT = "8000"
    }
    stage("build") {
        steps {
            script {
                def inspectDockerNetwork = sh script: "docker network create global_store_network", returnStatus: true
                if (inspectDockerNetwork == 0) {
                    sh "echo 'Creating Docker Network...'"
                    sh "docker network create global_store_network"
                    sh "echo 'Network Has been Created..'"
                }
            }
            dir("test_env") {
                sh "docker-compose up -d"
                sleep 10
                sh "echo 'Docker Built Image Successfully! Running Container....'"
            }
        }
    }
    stage("test") {
        steps {
            load "./test_env/version_env.groovy"
            sh "echo 'Running Test Pipeline'"
            sh "echo 'Running Healtcheck Test...'"
            sh "echo 'Sleeping until the Application will be fully ready...'"
            sleep 10
            script {
                command = """curl -s -X GET -H 'accept: */*' 'http://${env.APPLICATION_HOST}:${env.APPLICATION_PORT}/healthcheck/'"""
                responseStatus = sh(script: command, returnStdout: true).trim()
                if (responseStatus != "200") {
                    sh "echo 'Application Responded with Failure, Not Ready for Production...'"
                    error "Health Check Stage Failure."
                }
            }
        }
        post {
            always {
                dir("test_env") {
                    sh "echo 'Removing Testing Environment'"
                    sh "docker-compose down"
                }
            }
        }
    } /////////// It has not started executing the deployment, so the error is somewhere above ///////////
    stage("deployment") {
        steps {
            load "./test_env/version_env.groovy"
            sh "echo 'Running Deployment Pipeline Stage...'"
            sh "echo 'Tagging new Image Version'"
            withCredentials([usernamePassword(
                credentialsId: "DockerHub", // Credential ID that should be created on the Jenkins server
                usernameVariable: env.DOCKERHUB_CREDENTIALS_USR, // Credential username that should be created on the Jenkins server
                passwordVariable: env.DOCKERHUB_CREDENTIALS_PSW, // Credential password that should be created on the Jenkins server
            )]) {
                sh "docker login -u ${env.DOCKERHUB_CREDENTIALS_USR} -p ${env.DOCKERHUB_CREDENTIALS_PSW}"
                sh "echo 'Logged In.. Into Docker.'"
                sh "echo 'Tagging An Image'"
                sh "docker tag new_versioned_image ${env.DOCKERHUB_REPOSITORY_LINK}:latest"
                sh "echo 'Tagged... Pushing onto docker repo.'"
                sh "docker push ${env.DOCKERHUB_REPOSITORY_LINK}:latest"
                sh "echo 'Tagged Successfully.. Pushing Image On Docker Hub..'"
                sh "echo 'Image has been Pushed Successfully! Pipeline Finished.'"
            }
        }
    }
}
The output of that snippet is the following (I've separated the stage logs in order to make them easier to read):
/////////// Build Stage ///////////////
First time build. Skipping changelog.
[Pipeline] }
[Pipeline] // stage
[Pipeline] withEnv
[Pipeline] {
[Pipeline] withCredentials
Masking supported pattern matches of $DOCKERHUB_CREDENTIALS or $DOCKERHUB_CREDENTIALS_PSW
[Pipeline] {
[Pipeline] withEnv
[Pipeline] {
[Pipeline] stage
[Pipeline] { (build)
[Pipeline] dir
Running in /var/jenkins_home/workspace/Store Pipeline/test_env
[Pipeline] {
[Pipeline] sh
+ docker-compose up -d
Container test_postgres_store_database Creating
Container test_postgres_store_database Created
Container test_store_application_server Creating
Container test_store_application_server Created
Container test_postgres_store_database Starting
Container test_postgres_store_database Started
Container test_store_application_server Starting
Container test_store_application_server Started
[Pipeline] sleep
Sleeping for 10 sec
[Pipeline] sh
+ echo Docker Built Image Successfully! Running Container....
Docker Built Image Successfully! Running Container....
[Pipeline] }
[Pipeline] // dir
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (test)
[Pipeline] load
[Pipeline] { (./test_env/version_env.groovy)
[Pipeline] }
[Pipeline] // load
[Pipeline] sh
////////////// Testing Stage goes there /////////////////////
+ echo Running Test Pipeline
Running Test Pipeline
[Pipeline] sh
+ echo Running Healtcheck Test...
Running Healtcheck Test...
[Pipeline] sh
+ echo Sleeping until the Application will be fully ready...
Sleeping until the Application will be fully ready...
[Pipeline] sleep
Sleeping for 10 sec
[Pipeline] script
[Pipeline] {
[Pipeline] sh
+ curl -s -X GET -H accept: */* http://0.0.0.0:8000/healthcheck/
[Pipeline] }
[Pipeline] // script
Post stage
[Pipeline] dir
Running in /var/jenkins_home/workspace/Store Pipeline/test_env
[Pipeline] {
[Pipeline] sh
+ echo Removing Testing Environment
Removing Testing Environment
[Pipeline] sh
+ docker-compose down
Container test_store_application_server Stopping
Container test_store_application_server Stopping
Container test_store_application_server Stopped
Container test_store_application_server Removing
Container test_store_application_server Removed
Container test_postgres_store_database Stopping
Container test_postgres_store_database Stopping
Container test_postgres_store_database Stopped
Container test_postgres_store_database Removing
Container test_postgres_store_database Removed
///// Deployment Stage Goes there.... ////////////
[Pipeline] }
[Pipeline] // dir
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (deployment)
Stage "deployment" skipped due to earlier failure(s) ///// The Error Message Goes There....
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // withCredentials
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
//// ERROR /////
[Pipeline] End of Pipeline
ERROR: script returned exit code 7
Finished: FAILURE
The problem is that it does not say what exactly went wrong: the stage is simply "skipped due to earlier failure(s)", and the logs above do not show any error.
I would really appreciate any help or suggestions on how to solve this issue.
Thanks.
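Two general observations that may help narrow this down: exit code 7 is what curl returns when it cannot connect to the host at all, and an sh step called with returnStdout: true throws when its script exits non-zero, so the test stage fails right at the curl call and the deployment stage is then skipped. A rough sketch (an illustration, not a verified fix) of a health check that surfaces the HTTP status code instead of aborting on a connection error, with the URL copied from the log:
script {
    // curl writes 000 as the status code when it cannot connect; '|| true' keeps the
    // shell exit code at 0 so the sh step itself does not abort the stage.
    def status = sh(
        script: "curl -s -o /dev/null -w '%{http_code}' 'http://0.0.0.0:8000/healthcheck/' || true",
        returnStdout: true
    ).trim()
    echo "Health check returned HTTP ${status}"
    if (status != "200") {
        error "Health Check Stage Failure (got '${status}' instead of 200)."
    }
}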

Jenkins cannot pass parameters to the remote server's shell script with the 'Publish Over SSH' plugin

I execute a shell script on a remote server via Jenkins and want to pass it a defined environment variable.
My test pipeline script is as follows:
#!/usr/bin/env groovy
pipeline {
    agent any
    environment {
        param = "param1"
    }
    stages {
        stage('deploy project') {
            steps {
                sshPublisher(publishers: [sshPublisherDesc(configName: '192.168.1.112-ssh',
                    transfers: [sshTransfer(cleanRemote: true,
                        execCommand: '''
                            /root/test.sh hello ${param} world ${JOB_NAME}
                        ''',
                        execTimeout: 120000, patternSeparator: '[, ]+', remoteDirectory: '/',
                        removePrefix: '', sourceFiles: '')])])
            }
        }
    }
}
The contents of the remote script 'test.sh' are as follows
#! /bin/sh
echo "$1"
echo "$2"
echo "$#"
The Jenkins console log for the build is as follows:
[Pipeline] }
[Pipeline] // stage
[Pipeline] withEnv
[Pipeline] {
[Pipeline] withEnv
[Pipeline] {
[Pipeline] stage
[Pipeline] { (deploy project)
[Pipeline] step
SSH: Connecting from host [2a132d40d769]
SSH: Connecting with configuration [192.168.1.112-ssh] ...
SSH: EXEC: STDOUT/STDERR from command [
/root/test.sh hello ${param} world pipeline-docker-demo
] ...
hello
world
hello world pipeline-docker-demo
SSH: EXEC: completed after 207 ms
SSH: Disconnecting configuration [192.168.1.112-ssh] ...
SSH: Transferred 0 file(s)
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Finished: SUCCESS
You can see that only the Jenkins built-in variable '${JOB_NAME}' gets passed, and not the variable defined in this script, '${param}'.
My versions:
Jenkins: 2.235.2
'Publish Over SSH' plugin: 1.20.1
What do I have to do to pass this custom variable? Please help me, thanks a lot.
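One detail that may matter here: a triple-single-quoted Groovy string ('''...''') is never interpolated by Groovy, so ${param} reaches the plugin literally. A rough, untested sketch of the same transfer using a double-quoted GString, so that Groovy expands the values on the Jenkins side before the command is sent (configuration name and script path are copied from the question):
sshPublisher(publishers: [sshPublisherDesc(configName: '192.168.1.112-ssh',
    transfers: [sshTransfer(cleanRemote: true,
        // Double quotes make this a GString, so env.param and env.JOB_NAME are
        // expanded by Groovy before the command reaches the remote shell.
        execCommand: """
            /root/test.sh hello ${env.param} world ${env.JOB_NAME}
        """,
        execTimeout: 120000, patternSeparator: '[, ]+', remoteDirectory: '/',
        removePrefix: '', sourceFiles: '')])])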

Jenkins Pipeline "yarn install" command not found

This is my first Jenkins script. It works fine on Linux, but after migrating to macOS (High Sierra) I get a shell script error.
Node and yarn are installed for the local Jenkins user. I can't figure out why this error happens; could anyone give me a hand with this?
Here is my Jenkinsfile:
node {
    stage('Check out') {
        checkout scm
    }
    stage('Prepare') {
        sh "yarn install"
    }
    stage('Test') {
        sh "yarn test"
    }
    stage('Sonar') {
        if (env.BRANCH_NAME == 'dev') {
            def scannerHome = tool 'sonar scanner';
            withSonarQubeEnv('sonar') {
                sh "${scannerHome}/bin/sonar-scanner"
            }
        }
    }
}
And full log:
14:43:11 Connecting to https://api.github.com using hariklee/******
Obtained Jenkinsfile from 6c639bd70ac86cbe6a49ac0b58bcc10e3c64a375
Running in Durability level: MAX_SURVIVABILITY
[Pipeline] node
Running on Jenkins in
/Users/Shared/Jenkins/Home/workspace/wingman_423_ci_cd-7PSSGRAMBTXUQRESYCNVODXU7IZJLJLPHQOE3KYEPCSAAYAFFD4A
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Check out)
[Pipeline] checkout
git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
git config remote.origin.url https://github.com/wingman-xyz/app.git # timeout=10
Fetching without tags
Fetching upstream changes from https://github.com/wingman-xyz/app.git
git --version # timeout=10
using GIT_ASKPASS to set credentials
git fetch --no-tags --progress https://github.com/wingman-xyz/app.git +refs/heads/423_ci_cd:refs/remotes/origin/423_ci_cd
Checking out Revision 6c639bd70ac86cbe6a49ac0b58bcc10e3c64a375 (423_ci_cd)
git config core.sparsecheckout # timeout=10
git checkout -f 6c639bd70ac86cbe6a49ac0b58bcc10e3c64a375
Commit message: "jenkins test"
First time build. Skipping changelog.
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Prepare)
[Pipeline] sh
[wingman_423_ci_cd-7PSSGRAMBTXUQRESYCNVODXU7IZJLJLPHQOE3KYEPCSAAYAFFD4A] Running shell script
yarn install
/Users/Shared/Jenkins/Home/workspace/wingman_423_ci_cd-7PSSGRAMBTXUQRESYCNVODXU7IZJLJLPHQOE3KYEPCSAAYAFFD4A#tmp/durable-cf573520/script.sh: line 2: yarn: command not found
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
GitHub has been notified of this commit’s build result
ERROR: script returned exit code 127
Finished: FAILURE
There is no yarn command in your PATH.
Run npm install -g yarn first:
stage('Prepare') {
    sh "npm install -g yarn"
    sh "yarn install"
}
If you get an error that the npm command is not found, you will have to add npm to your PATH explicitly using withEnv {}:
withEnv(['PATH+NODE=/path/to/node/bin']) {
    stage('Prepare') {
        sh "npm install -g yarn"
        sh "yarn install"
    }
}
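As a side note on that syntax: inside withEnv, a variable written as PATH+SOMETHING=/dir is prepended to PATH rather than replacing it. So if, say, node and npm live under /usr/local/bin (an assumption, not something stated in the question), the wrapper might look like:
withEnv(['PATH+NODE=/usr/local/bin']) {
    sh "npm install -g yarn"
    sh "yarn install"
}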

Jenkins, Host key verification failed, script returned exit code 255

I have a build server running Jenkins 2.73.3 and other servers where I deploy my apps.
I have also set up a credential to connect from the build server to the other servers.
But every time I add another server it is difficult: I set up the authorized key on the new server and it works from the command line, but not from Jenkins.
Here is a little recipe that fails:
pipeline {
    agent any
    stages {
        stage('Set conditions') {
            steps {
                sshagent(['xxxx-xxxx-xxxx-xxxx-xxxx']) {
                    sh "ssh user#product.company.com 'echo $HOME'"
                }
            }
        }
    }
}
And here is the log of the failure:
[ssh-agent] Started.
[Pipeline] {
[Pipeline] sh
[check] Running shell script
+ ssh user#product.company.com echo /var/lib/jenkins
$ ssh-agent -k
unset SSH_AUTH_SOCK;
unset SSH_AGENT_PID;
echo Agent pid 12567 killed;
[ssh-agent] Stopped.
Host key verification failed.
[Pipeline] }
[Pipeline] // sshagent
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: script returned exit code 255
Finished: FAILURE
It seems that the solution was to add the StrictHostKeyChecking option to the ssh command:
sh "ssh -o StrictHostKeyChecking=no user#product.company.com 'echo $HOME'"

How to configure a Jenkins pipeline so that if one of multiple shell scripts fails, the job still runs the rest instead of exiting

I want to configure a Jenkins pipeline job that runs multiple shell scripts. Even if one shell script fails, the job should run the other two before failing.
You need to tweak your shell script, not the Jenkins pipeline, to achieve what you want!
Try this in your shell script:
shell script command > /dev/null 2>&1 || true
Whether it fails or passes, execution will continue to the next shell script.
You can always wrap the potentially failing sh execution in a try/catch:
node {
    sh "echo test"
    try {
        sh "/dev/null 2>&1"
    } catch (error) {
        echo "$error"
    }
    sh "echo test1"
}
The above runs successfully and produces:
Started by user Blazej Checinski
[Pipeline] node
Running on agent2 in /home/build/workspace/test
[Pipeline] {
[Pipeline] sh
[test] Running shell script
+ echo test
test
[Pipeline] sh
[test] Running shell script
+ /dev/null
/home/build/workspace/test#tmp/durable-b4fc2854/script.sh: line 2: /dev/null: Permission denied
[Pipeline] echo
hudson.AbortException: script returned exit code 1
[Pipeline] sh
[test] Running shell script
+ echo test1
test1
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Finished: SUCCESS
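Another option (a sketch with placeholder script names, not taken from the answers above) is to ask sh for the exit code instead of letting it throw, which keeps the pipeline code free of try/catch:
node {
    // returnStatus: true makes sh return the exit code instead of failing the step.
    def rc = sh(script: "./first-script.sh", returnStatus: true)
    if (rc != 0) {
        echo "first-script.sh failed with exit code ${rc}, continuing with the remaining scripts"
        currentBuild.result = 'UNSTABLE' // or 'FAILURE' if the job should still end up failed
    }
    sh "./second-script.sh"
    sh "./third-script.sh"
}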
