Executing a deployment script on a remote SSH server through a Jenkins pipeline

I've got a Jenkins pipeline containing stages for source loading, building, and deploying to a remote machine through SSH. The problem is with the last one. I saved a script of the following form on the remote server:
#!/bin/bash
bash /<pathTo>/jboss-cli.sh --command="deploy /<anotherPath>/service.war --force"
It works fine when executed in a terminal connected to the remote server.
The best outcome I've received through Jenkins is
/<pathTo>/jboss-cli.sh: line 87: usr/bin/java/bin/java: No such file or directory
in the Jenkins console output.
I've tried switching between bash and sh, exporting the path to Java in the pipeline script, etc.
Any suggestions are appreciated.
Thanks!
P.S. The execution call from Jenkins looks like:
sh """
ssh -o StrictHostKeyChecking=no $connectionName 'bash /<pathToTheScript>/<scriptName>.sh'
"""

line 87: usr/bin/java/bin/java: No such file or directory
Per the error line, the script is resolving the Java path relative to usr, not /usr. Can you check whether this is the problem?
Sorry, I know this should be in the comments section, but I don't have the right to add comments yet.
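As a hedged sketch of one common fix (the JDK location below is a placeholder assumption, not taken from the question): a non-interactive SSH session does not source the login-shell profile files, so JAVA_HOME and PATH can be exported explicitly before calling the script:
// Sketch only: adjust the JDK path to the remote machine's layout.
// \$ escapes Groovy interpolation, so the remote shell sees $JAVA_HOME and $PATH.
sh """
ssh -o StrictHostKeyChecking=no $connectionName 'export JAVA_HOME=/usr/lib/jvm/java-8-openjdk; export PATH=\$JAVA_HOME/bin:\$PATH; bash /<pathToTheScript>/<scriptName>.sh'
"""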

Related

How to execute shell commands in a Jenkins pipeline script on a Windows machine

node {
    def app
    stage('Build Image') {
        bat 'cd C:/Users/trivedi2/Desktop/DEV_pipeline/DEV_Workspace'
        app = docker.build("CDashboard")
    }
}
This is my pipeline code for creating Docker images.
The error while running the Jenkins job:
nohup: failed to run command 'sh': No such file or directory
Can anyone help me with this issue? I am using a Windows machine.
First, set the PATH environment variable on the machine so that it points to sh.exe under Git's bin directory.
Second, create a symlink for nohup.exe, as the error suggests:
mklink "C:\Program Files\Git\bin\nohup.exe" "C:\Program Files\git\usr\bin\nohup.exe"
After this setup you can use node { sh "git --version" } in your Jenkinsfile and it works fine.
https://stackoverflow.com/a/45151156/3648023
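As a hedged sketch only (assuming Git for Windows is set up as above), the original pipeline could be reworked like this; dir is used instead of a bare cd, since a cd in one bat step does not carry over to the next step, and the image name is lowercased because Docker requires lowercase repository names:
node {
    def app
    stage('Build Image') {
        // dir scopes the working directory for the steps inside it;
        // a plain 'cd' in a separate bat step does not persist.
        dir('C:/Users/trivedi2/Desktop/DEV_pipeline/DEV_Workspace') {
            // Docker repository names must be lowercase.
            app = docker.build('cdashboard')
        }
    }
}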

Jenkins build logs in slave nodes

I am trying to push our Jenkins build logs to S3.
I used the Groovy plugin and the following script in the build phase:
// This script should be run in a system groovy script build step.
// The FilePath class understands what node a path is on, not just the path.
import hudson.FilePath
// Get path to console log file on master.
logFile = build.getLogFile()
// Turn this into a FilePath object.
logFilePathOnMaster = new FilePath(logFile)
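// Compose the log file name from the job name and a release-version env var.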
logFileName = build.envVars["JOB_BASE_NAME"] + build.envVars["RT_RELEASE_STAGING_VERSION"] + '.txt'
// Create remote file path obj to build agent.
remoteLogFile = new FilePath(build.workspace, logFileName)
// Copy contents of master's console log to file on build agent.
remoteLogFile.copyFrom(logFilePathOnMaster)
Then I am using the S3 plugin to push the .txt files to S3.
But this script fetches the build log file from the master node.
How are the build logs transferred from the slave to the master node?
Can I access the build log file on my slave node without the master's involvement at all?
The slave node must be storing the build logs somewhere while building, but I can't seem to find them.
I am not very familiar with Groovy, but here is a shell-script solution that worked for me.
I am using the Jenkins 'Node and Label parameter' plugin to run our Java process on a slave node. The job is triggered using the 'Build >> Execute Shell' option. The log is collected into a file as below:
sudo java -jar xxx.jar 2>&1 | sudo tee -a ${JOB_NAME}/${BUILD_NUMBER}.log
This log file is then pushed to S3:
sudo aws --region ap-south-1 s3 cp ${JOB_NAME}/${BUILD_NUMBER}.log s3://bucket/JenkinsLogs/${JOB_NAME}/${BUILD_NUMBER}.log
It's working perfectly for us. Hope it helps you too.
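For a pipeline job, a hedged equivalent of the two shell steps might look like the following; the node label, jar name, bucket, and region are assumptions carried over from the answer, and sudo is dropped on the assumption that the agent user can run the process:
node('my-slave-label') {
    // Capture stdout and stderr of the Java process into a per-build log file.
    sh 'mkdir -p "${JOB_NAME}" && java -jar xxx.jar 2>&1 | tee -a "${JOB_NAME}/${BUILD_NUMBER}.log"'
    // Push the collected log to S3 (bucket and region as in the answer above).
    sh 'aws --region ap-south-1 s3 cp "${JOB_NAME}/${BUILD_NUMBER}.log" s3://bucket/JenkinsLogs/${JOB_NAME}/${BUILD_NUMBER}.log'
}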

In Jenkins, on a Windows remote connected through Cygwin sshd, how to run an sh pipeline step?

We are porting our Jenkins pipeline to work on Windows environments.
The Jenkins master connects to our Windows remote, named winremote, using Cygwin sshd.
As described on this page, the remote root directory of the node is given as a plain Windows path (in this case, it is set to C:\cygwin64\home\jenkins\jenkins-slave-dir).
This minimal pipeline example:
node("winremote")
{
echo "Entering Windows remote"
sh "ls -l"
}
fails with the error:
[Pipeline] echo
Entering Windows remote
[Pipeline] sh
[C:\cygwin64\home\jenkins\jenkins-slave-dir\workspace\proto-platforms] Running shell script
sh: C:\cygwin64\home\jenkins\jenkins-slave-dir\workspace\proto-platforms#tmp\durable-a739272f\script.sh: command not found
SSHing into the Windows remote, I was able to see that Jenkins actually created the workspace subdirectory in C:\cygwin64\home\jenkins\jenkins-slave-dir, but it is left empty.
Is there a known way to use the sh pipeline step on such a remote?
A PR from blatinville, which was merged a few hours after this question, solves this first issue.
Sadly, it introduces another problem, described in ticket JENKINS-41225, with the error:
nohup: failed to run command 'sh': No such file or directory
There is a proposed PR with a quick fix for this issue.
There is then a last problem with how the durable-task-plugin evaluates whether a task is still alive using 'ps', with another PR fixing it.
Temporary solution
Until those (or equivalent) fixes are applied, one can compile a Cygwin-compatible durable-task-plugin with the following commands:
git clone https://github.com/Adnn/durable-task-plugin.git -b cygwin_fixes
cd durable-task-plugin/
mvn clean install -DskipTests
This notably generates the target/durable-task.hpi file, which can be used to replace the durable-task.jpi file installed by Jenkins in its plugins folder. Jenkins then needs to be restarted.

Why does user jenkins throw an error when running a Capistrano script from within the Jenkins interface, but user root from the command line does not?

I am very new to Jenkins and Capistrano. I've found existing documentation to be very fragmented when it comes to running Capistrano from within Jenkins. Having said that, I have successfully installed Jenkins and Capistrano. I've created the items and can trigger the scripts. For my initial test I am just running the default deployment script as it is out of the box.
When I run the following via a Jenkins trigger:
#!/bin/bash -exl
export PATH=$PATH:/usr/local/bin
cap staging deploy
I get the error (edited for brevity):
+ USER=jenkins
+ LOGNAME=jenkins
Stage not set, please call something such as `cap production deploy`, where production is a stage you have defined.
Nothing appears in /capistrano.log.
When I run the same commands inside of a script from the command line I get:
+ USER=root
+ LOGNAME=root
The capistrano.log file contains:
---
INFO START 2016-09-29 16:50:03 +0000 cap staging deploy
INFO ------------------------------------------------------------------------
---
It appears that the jenkins user is not configured correctly and is missing some dependencies, but I can't figure out how to fix this.
Help is appreciated. If anyone knows of a good resource for using Capistrano together with Jenkins, that would also be a huge help.
It turns out the issue was a space in my workspace directory name. The directory name was not being interpreted correctly within the Jenkins script for some reason, but was fine in the bash script I ran from the command line. Thanks to my buddy Scott for figuring that one out!
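As a hedged illustration of the underlying gotcha (the cd step is added here purely for the example): quoting any path that may contain spaces keeps the shell from splitting it into separate words:
node {
    sh '''#!/bin/bash -exl
export PATH="$PATH:/usr/local/bin"
# Quote paths that may contain spaces, e.g. a workspace derived from the job name.
cd "$WORKSPACE"
cap staging deploy
'''
}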

Execute a script from jenkins pipeline

I have a Jenkins pipeline that builds a Java artifact,
copies it to a directory, and then attempts to execute an external script.
I am using this syntax within the pipeline script to execute the external script:
dir('/opt/script-directory') {
    sh './run.sh'
}
The script is just a simple Docker build script, but the build fails with this exception:
java.io.IOException: Failed to mkdirs: /opt/script-directory#tmp/durable-ae56483c
The error is confusing because the script does not create any directories; it just builds a Docker image and places the freshly built Java artifact in that image.
If I create a different job in Jenkins that executes the external script as
its only build step, and then call that job from my pipeline script using this syntax:
build 'docker test build'
everything works fine: the script executes within the other job and the pipeline
continues as expected.
Is this the only way to execute a script that is external to the workspace?
What am I doing wrong in my attempt to execute the script from within
the pipeline script?
The issue is that the jenkins user (or whatever user runs the Jenkins slave process) does not have write permission on /opt, and the sh step wants to create the script-directory#tmp/durable-ae56483c subdirectory there.
Either remove the dir block and use the absolute path to the script:
sh '/opt/script-directory/run.sh'
or give the jenkins user write permission on /opt (not preferred, for security reasons).
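A minimal sketch of the first option in context (the stage name is hypothetical); with no dir step, the durable-task control directory is created inside the writable workspace instead of /opt:
node {
    stage('Run external script') {
        // Only the script's path points at /opt; the step itself runs in the workspace.
        sh '/opt/script-directory/run.sh'
    }
}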
This looks like a bug in Jenkins; the durable directories are meant to store recovery information, e.g. before executing an external script using sh.
For now, all you can do is make sure that /opt/script-directory has read, write, and execute permissions set for the jenkins user.
Another workaround is not to change the current directory and simply execute sh with the absolute path:
sh '/opt/script-directory/run.sh'
I had a similar problem when trying to execute a script in a Jenkins pipeline using a Jenkinsfile.
I was trying to run a script, restart_rb.sh, with sudo.
To run it, I specified the present working directory ($PWD):
sh 'sudo sh $PWD/restart_rb.sh'
