In Jenkins, on a Windows remote connected through Cygwin sshd, how to run an sh pipeline step?

We are porting our Jenkins pipeline to work on Windows environments.
The Jenkins master connects to our Windows remote (named winremote) using Cygwin sshd.
As described on this page, the Remote root directory of the node is given as a plain Windows path (in this case, it is set to C:\cygwin64\home\jenkins\jenkins-slave-dir).
This minimal pipeline example:
node("winremote")
{
echo "Entering Windows remote"
sh "ls -l"
}
fails with the error:
[Pipeline] echo
Entering Windows remote
[Pipeline] sh
[C:\cygwin64\home\jenkins\jenkins-slave-dir\workspace\proto-platforms] Running shell script
sh: C:\cygwin64\home\jenkins\jenkins-slave-dir\workspace\proto-platforms@tmp\durable-a739272f\script.sh: command not found
SSHing into the Windows remote, I was able to see that Jenkins actually created the workspace subdirectory in C:\cygwin64\home\jenkins\jenkins-slave-dir, but it is left empty.
Is there a known way to use the sh pipeline step on such a remote?

A PR from blatinville, which was merged a few hours after this question was asked, solves this first issue.
Sadly, it introduces another problem, described in the ticket JENKINS-41225, with the error:
nohup: failed to run command 'sh': No such file or directory
There is a proposed PR with a quick fix for this issue.
Then there is a last problem with how the durable-task-plugin evaluates whether a task is still alive using 'ps', with another PR fixing it.
Temporary solution
Until those (or equivalent) fixes are applied, one could compile a Cygwin compatible durable-task-plugin with the following commands:
git clone https://github.com/Adnn/durable-task-plugin.git -b cygwin_fixes
cd durable-task-plugin/
mvn clean install -DskipTests
This notably generates the target/durable-task.hpi file, which can be used to replace the durable-task.jpi file installed by Jenkins in its plugins folder. Jenkins then needs to be restarted.
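For example, the replacement could look like this on the Jenkins master (a sketch only; the JENKINS_HOME location /var/lib/jenkins and the systemd service name are assumptions, adjust them to your installation):
# copy the freshly built plugin over the installed one
cp target/durable-task.hpi /var/lib/jenkins/plugins/durable-task.jpi
# restart Jenkins so the replaced plugin is loaded
sudo systemctl restart jenkins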

Related

"Docker: command not found" from Jenkins on MacOS

When running jobs from a Jenkinsfile with Pipeline syntax and a Docker agent, the pipeline fails with "Docker: command not found." I understand this to mean that either (1) Docker is not installed; or (2) Jenkins is not pointing to the correct Docker installation path. My situation is very similar to this issue: Docker command not found in local Jenkins multi branch pipeline. Jenkins is installed on MacOS and running off of localhost:8080. Docker is also installed (v18.06.0-ce-mac70).
That user's solution included a switch from declarative pipeline syntax to scripted node syntax. However, I want to resolve the issue while retaining the declarative syntax.
Jenkinsfile
#!groovy
pipeline {
    agent {
        docker {
            image 'node:7-alpine'
        }
    }
    stages {
        stage('Unit') {
            steps {
                sh 'node -v'
                sh 'npm -v'
            }
        }
    }
}
Error message
docker inspect -f . node:7-alpine
docker: command not found
docker pull node:7-alpine
docker: command not found
In Jenkins Global Tool Configuration, for Docker installations I tried both (1) install automatically (from docker.com); and (2) a local installation with installation root /usr/local/.
All of the relevant plugins appear to be installed as well.
I solved this problem here: https://stackoverflow.com/a/58688536/8160903
(Add Docker's path to the Homebrew Jenkins plist, /usr/local/Cellar/jenkins-lts/2.176.3/homebrew.mxcl.jenkins-lts.plist)
I would check the user who is running the Jenkins process and make sure they are part of the docker group (see the sketch below).
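On a Linux agent, that check and fix might look like the following sketch (the service user name jenkins is an assumption; on MacOS, Docker Desktop manages access differently):
id -nG jenkins                     # list the groups the jenkins user belongs to
sudo usermod -aG docker jenkins    # add the jenkins user to the docker group
sudo systemctl restart jenkins     # restart so the new membership takes effect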
You can try adding the full path of the docker executable on your machine to Jenkins at Manage Jenkins > Global Tool Configuration.
I've seen it happen sometimes that the user which started Jenkins doesn't have the executable's location on $PATH.
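If docker is only needed inside sh steps, another option is to extend PATH from the pipeline itself with withEnv; a sketch, assuming the Docker binaries live in /usr/local/bin (note this will not help when the docker agent itself fails to start):
node {
    // the PATH+SOMETHING form prepends the value to the existing PATH
    withEnv(['PATH+DOCKER=/usr/local/bin']) {
        sh 'docker version'
    }
}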

How to execute shell commands in a Jenkins pipeline script on a Windows machine

node {
    def app
    stage("Build Image") {
        bat 'cd C:/Users/trivedi2/Desktop/DEV_pipeline/DEV_Workspace'
        app = docker.build("CDashboard")
    }
}
This is my pipeline code for creating Docker images.
The error while running the Jenkins job is:
nohup: failed to run command 'sh': No such file or directory
Can anyone help me with this issue? I am using a Windows machine.
First, set the PATH environment variable on the machine so that it points to sh.exe in Git -> bin.
Second, create a symlink to nohup.exe, as the error suggests:
mklink "C:\Program Files\Git\bin\nohup.exe" "C:\Program Files\git\usr\bin\nohup.exe"
After this setup you can use node { sh "git --version" } in your Jenkinsfile and it works fine.
https://stackoverflow.com/a/45151156/3648023

Executing a deployment script on a remote SSH server through a Jenkins pipeline

I've got a Jenkins pipeline containing stages for source loading, building, and deploying on a remote machine through SSH. The problem is with the last one. I saved a script of the following template on the remote server:
#!/bin/bash
bash /<pathTo>/jboss-cli.sh --command="deploy /<anotherPath>/service.war --force"
It works fine if executed in a terminal connected to the remote server.
The best outcome I've received through Jenkins is
/<pathTo>/jboss-cli.sh: line 87: usr/bin/java/bin/java: No such file or directory
in Jenkins console output.
I tried switching between bash and sh, exporting the path to Java in the pipeline script, etc.
Any suggestions are appreciated.
Thanks!
P.S. The execution call from Jenkins looks like:
sh """
ssh -o StrictHostKeyChecking=no $connectionName 'bash /<pathToTheScript>/<scriptName>.sh'
"""
line 87: usr/bin/java/bin/java: No such file or directory
As per the error line, the path is being resolved from usr rather than /usr. Can you check whether this is the problem?
Sorry, I know this should be in the comments section, but I don't have the right to add comments yet.
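If that is the case, one possible workaround is to export a correctly rooted JAVA_HOME before calling the script; a sketch only, since the actual Java location on the remote (here /usr/lib/jvm/java) is an assumption:
sh """
ssh -o StrictHostKeyChecking=no $connectionName 'export JAVA_HOME=/usr/lib/jvm/java && bash /<pathToTheScript>/<scriptName>.sh'
"""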

Can Jenkins source .bashrc of associated user?

My Jenkins runs inside Tomcat, which runs under user buildman, so all the jobs run under that user (on CentOS). Some of my jobs depend on environment variables, which I set in .bashrc. However, the Jenkins jobs do not see any of the variables set in that file, even though that script is supposed to be sourced for non-login shells, which is what I would have thought a Jenkins shell to be.
The workaround is simple: I just copy and paste all the variables from my .bashrc into the build command script in Jenkins. But that is not very elegant. Is there any way to get Jenkins to source the .bashrc of the user it runs under, so that it gets its usual configuration without having to set it separately in each job?
Jenkins creates a temporary sh script for each script section (at least when using a "classical" project; for the Pipeline approach I'm not sure). This temporary script is executed with sh, which on most Linux systems is a symlink to bash; this SO post gives some insights.
Also, according to the man pages of bash, invoking bash as sh "tries to mimic the startup behavior of historical versions of sh as closely as possible, while conforming to the POSIX standard as well."
This means the .bashrc is not interpreted at all. However, you can try to source the .bashrc for each shell invocation, as sketched below.
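A minimal sketch of that idea in a pipeline sh step (MY_BASH_VAR is a hypothetical variable standing in for whatever your .bashrc defines):
sh '''
# explicitly source the user's .bashrc before relying on anything it defines
. ~/.bashrc
echo "$MY_BASH_VAR"
'''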
So, I tried a few things, and the only solutions that seem to work are:
have a shell script in your repo that uses bash
write a file, chmod it via sh and then run it (see the sketch below)
In both cases, there needs to be an executable file with content like:
#!/usr/bin/env bash
...
Using sh """ bash -c "...." """" doesn't seem to work
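A sketch of the "write a file, chmod it via sh and then run it" approach from the list above (the script name and variable are illustrative only):
// write a small bash script into the workspace, make it executable, and run it
writeFile file: 'with_bashrc.sh', text: '''#!/usr/bin/env bash
source ~/.bashrc
echo "$MY_BASH_VAR"
'''
sh 'chmod +x with_bashrc.sh && ./with_bashrc.sh'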
When my Jenkins agent is launched by SSH on Red Hat Linux, I see that it does print environment variables defined in .bashrc.
My problem with not seeing changes to .bashrc was that I needed to relaunch the agent so it picked up the change.
I have found a command that works for me
In .profile, .bashrc, etc.:
export MY_BASH_VAR=123
In Execute Shell:
VAR=$(bash -c "source ~/.profile && echo \$MY_BASH_VAR")
echo $VAR
This will print 123 in the console output when the job builds.

Execute a script from a Jenkins pipeline

I have a Jenkins pipeline that builds a Java artifact, copies it to a directory, and then attempts to execute an external script.
I am using this syntax within the pipeline script to execute the external script
dir('/opt/script-directory') {
    sh './run.sh'
}
The script is just a simple docker build script, but the build will fail
with this exception:
java.io.IOException: Failed to mkdirs: /opt/script-directory@tmp/durable-ae56483c
The error is confusing because the script does not create any directories. It is just building a docker image and placing the freshly built java artifact in that image.
If I create a different job in Jenkins that executes the external script as its only build step and then call that job from my pipeline script using this syntax:
build 'docker test build'
everything works fine: the script executes within the other job and the pipeline continues as expected.
Is this the only way to execute a script that is external to the workspace?
What am I doing wrong with my attempt at executing the script from within
the pipeline script?
The issue is that the jenkins user (or whatever user runs the Jenkins slave process) does not have write permission on /opt, and the sh step wants to create the script-directory@tmp/durable-ae56483c subdirectory there.
Either remove the dir block and use the absolute path to the script:
sh '/opt/script-directory/run.sh'
or give the jenkins user write permission on the /opt folder (not preferred, for security reasons).
This looks like a bug in Jenkins; durable directories are meant to store recovery information, e.g. before executing an external script using sh.
For now, all you can do is make sure that /opt/script-directory has read, write, and execute permission set for the jenkins user.
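A sketch of how those permissions could be granted, assuming the agent runs as user jenkins:
# give the agent user ownership of the script directory (user name is an assumption)
sudo chown -R jenkins /opt/script-directory
# ensure read, write, and execute permission for that user
sudo chmod -R u+rwx /opt/script-directory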
Another workaround would be not to change the current directory, just execute sh with it:
sh '/opt/script-directory/run.sh'
I had a similar concern when trying to execute a script in a Jenkins pipeline using a Jenkinsfile.
I was trying to run a script, restart_rb.sh, with sudo.
To run it, I specified the present working directory ($PWD):
sh 'sudo sh $PWD/restart_rb.sh'
