TL;DR: I want to use the sh step even though Jenkins is running on Windows. I do not want to use the bat step unless you can show me how to easily reproduce what I need done using bat.
I've been converting some old Jenkins jobs over to 2.x Pipeline script. One of my jobs uses the "Publish over SSH plugin" to:
Send artifacts to a remote server
Exec a set of commands on the remote server
I wanted to replicate this in Pipeline script, so I've done the following:
stage('Deploy') {
withCredentials([[$class: 'FileBinding', credentialsId: 'bitbucket-key-file', variable: 'SSHKEY']]) {
sh '''
scp -i "$SSHKEY" dsub.tar.gz tprmbbuild#192.168.220.57:dsubdeploy
scp -i "$SSHKEY" deployDsubUi.sh tprmbbuild#192.168.220.57:dsubdeploy
ssh -i "$SSHKEY" -o StrictHostKeyChecking=no 192.168.220.57 <<- EOF
DEPLOY_DIR=/home/tprmbbuild/dsubdeploy
echo '*** dos2unix using sed'
sed -e 's/\r$//' $DEPLOY_DIR/deployDsubUi.sh > $DEPLOY_DIR/deployDsubUi-new.sh
mv $DEPLOY_DIR/deployDsubUi-new.sh $DEPLOY_DIR/deployDsubUi.sh
chmod 755 $DEPLOY_DIR/deployDsubUi.sh
echo '*** Deploying Dsub UI'
$DEPLOY_DIR/deployDsubUi.sh $DEPLOY_DIR/dsub.tar.gz
EOF'''
}
}
Problem is, I get this stack trace when executing my build:
[Pipeline] sh
[E:\Jenkins\jenkins_home\workspace\tpr-ereg-ui-deploy@2] Running shell script
1 [main] sh 3588 E:\Jenkins\tools\Git_2.10.1\usr\bin\sh.exe: *** fatal error - add_item ("\??\E:\Jenkins\tools\Git_2.10.1", "/", ...) failed, errno 1
Stack trace:
Frame Function Args
000FFFF9BB0 0018005C24E (0018023F612, 0018021CC39, 000FFFF9BB0, 000FFFF8B30)
000FFFF9BB0 001800464B9 (000FFFFABEE, 000FFFF9BB0, 1D2345683BEC046, 00000000000)
000FFFF9BB0 001800464F2 (000FFFF9BB0, 00000000001, 000FFFF9BB0, 4A5C3A455C3F3F5C)
000FFFF9BB0 001800CAA8B (00000000000, 000FFFFCE00, 001800BA558, 1D234568CAFA549)
000FFFFCC00 00180118745 (00000000000, 00000000000, 001800B2C5E, 00000000000)
000FFFFCCC0 00180046AE5 (00000000000, 00000000000, 00000000000, 00000000000)
00000000000 00180045753 (00000000000, 00000000000, 00000000000, 00000000000)
000FFFFFFF0 00180045804 (00000000000, 00000000000, 00000000000, 00000000000)
End of stack trace
Agreed with "it is my belief it is failing to spawn the shell". It is trying to run "E:\Jenkins\tools\Git_2.10.1\usr\bin\sh.exe" (using Windows backslash syntax). Unless a shell executable (sh.exe) actually exists at that location, it will fail.
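If you want to keep the sh step, the first thing to verify is whether that sh.exe exists and is resolvable on the agent. A minimal diagnostic sketch, assuming the Git-for-Windows path from the stack trace above and a Windows agent label (adjust both to your setup):
node('windows') {
    stage('Check shell') {
        // Does the sh.exe from the stack trace exist at all?
        bat 'if exist "E:\\Jenkins\\tools\\Git_2.10.1\\usr\\bin\\sh.exe" (echo sh.exe found) else (echo sh.exe missing)'
        // Can the agent resolve an sh executable from PATH for the sh step to use?
        bat 'where sh || echo sh is not on PATH'
    }
}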
PowerShell (or cmd shell):
If you are open to using batch files, you would have to install/configure a couple of binaries (ssh and scp). Everything else falls into place (I see that you are sending commands to a remote machine over ssh; I assume the remote server is Linux/Unix based).
Alternatives:
You can use Cygwin or run Linux in VirtualBox (or any other software that emulates Linux on Windows). But running just three commands may not be worth the trouble (it will definitely pay off if you plan to convert or write more shell scripts in the future).
You can use "bat" instead of "sh" in windows.
Also use 2 backslashes to escape the path string correctly. See example below
node {
currentBuild.result = "SUCCESS"
try {
stage('Checkout'){
checkout scm
}
stage('Convert to Binary RPD'){
bat "D:\\oracle\\Middleware\\user_projects\\domains\\bi\\bitools\\bin\\biserverxmlexec -D .\\RPD -P Gl081Reporting -O .\\GLOBI.rpd"
}
stage('Notify'){
echo 'sending email'
// send to email
emailext (
subject: "SUCCESS: Job '${env.JOB_NAME} [${env.BUILD_NUMBER}]'",
body: """$PROJECT_NAME - Build # $BUILD_NUMBER - $BUILD_STATUS:
Check console output at $BUILD_URL to view the results.""",
to:"girish.lakshmanan#abc.co.uk girish.la#gmail.com"
)
}
}
catch (err) {
currentBuild.result = "FAILURE"
throw err
}
}
You should be able to run scp.exe from the Git installation directly within a batch script.
There is no "here document" for batch as far as I know, but you could just put the script to be run on the server into a separate file.
(Untested)
stage('Deploy') {
withCredentials([[$class: 'FileBinding', credentialsId: 'bitbucket-key-file', variable: 'SSHKEY']]) {
bat '''
REM SSHKEY is the key file path bound by withCredentials; the last line feeds server_script.sh to the remote shell via ssh.exe
E:\\Jenkins\\tools\\Git_2.10.1\\usr\\bin\\scp.exe -i "%SSHKEY%" dsub.tar.gz tprmbbuild@192.168.220.57:dsubdeploy
E:\\Jenkins\\tools\\Git_2.10.1\\usr\\bin\\scp.exe -i "%SSHKEY%" deployDsubUi.sh tprmbbuild@192.168.220.57:dsubdeploy
E:\\Jenkins\\tools\\Git_2.10.1\\usr\\bin\\ssh.exe -i "%SSHKEY%" -o StrictHostKeyChecking=no 192.168.220.57 < server_script.sh
'''
}
}
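For completeness, server_script.sh would carry what used to be the body of the here-document; a sketch lifted from the commands in the question:
#!/bin/sh
# server_script.sh - executed on 192.168.220.57 via "ssh ... < server_script.sh"
DEPLOY_DIR=/home/tprmbbuild/dsubdeploy
echo '*** dos2unix using sed'
sed -e 's/\r$//' $DEPLOY_DIR/deployDsubUi.sh > $DEPLOY_DIR/deployDsubUi-new.sh
mv $DEPLOY_DIR/deployDsubUi-new.sh $DEPLOY_DIR/deployDsubUi.sh
chmod 755 $DEPLOY_DIR/deployDsubUi.sh
echo '*** Deploying Dsub UI'
$DEPLOY_DIR/deployDsubUi.sh $DEPLOY_DIR/dsub.tar.gz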
Related
I have a legacy project in Jenkins that has to be pipelined (for later parallelization), hence I am moving from a simple tcsh script to a pipeline.
running the script as
#!/bin/tcsh
source ./mysettings.sh
update
works, but the same pipeline step fails due to missing alias expansion:
stage ('update') {
steps {
//should be working but alias expansion fails
sh 'tcsh -c "source ./mysettings.sh; alias; update"'
//manually expanding the alias works fine
sh 'tcsh -c "source ./mysettings.sh; alias; python update.py;"'
}
}
Calling alias in the steps properly lists all the set aliases, so I can see them, but not use them.
I know that in bash, alias expansion has to be enabled explicitly:
#enable shell option for alias_expansion
shopt -s expand_aliases
but in csh/tcsh that should be taken care of by source.
What am I missing?
Found the solution:
sh '#!/bin/tcsh \n' +
'source ./mysettings.sh \n' +
'echo "Calling my alias" \n' +
'my_alias \n'
Every sh step launches a new shell, so the whole script has to be passed as one string, including the line breaks.
Further adding to the confusion, the Jenkins documentation says that it starts "a bash", but it actually launched /bin/sh, which in my case pointed to something else.
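The concatenated form above can also be written as a single triple-quoted script, which some may find easier to read; a sketch (the shebang must be the very first characters of the script body):
sh '''#!/bin/tcsh
source ./mysettings.sh
echo "Calling my alias"
my_alias
'''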
My pipeline copies the binaries located in a Bitbucket workspace into the build workspace, then needs to add some secret files from the credential store to the build workspace, and then starts building the Docker image.
But the pipeline is failing when copying the files.
I searched and applied different solutions found here but still have the same error.
I am running the following stage:
stage('push credential in jenkins workspace') {
steps {
script {
withCredentials([
file(credentialsId: 'saremediation', variable: 'SA_KEY_PATH')]){
sh "ls -al"
sh "mkdir ${CERTIFDIR}"
sh "cp ${SA_KEY_PATH} ${CERTIFDIR}/credent.json"
}
}
}
}
It fails with the following error:
[Pipeline] sh
Warning: A secret was passed to "sh" using Groovy String interpolation, which is insecure.
Affected argument(s) used the following variable(s): [SA_KEY_PATH]
See https://jenkins.io/redirect/groovy-string-interpolation for details.
+ cp **** server/src/configuration/certificats/credent.json
cp: target 'server/src/configuration/certificats/credent.json' is not a directory
The CERTIFDIR folder is definitely created: when I add sh "ls -al ${CERTIFDIR}", I can see that the folder exists and is empty.
Fixed the problem by applying this syntax in the cp command:
sh "cp \"${SA_KEY_PATH}\" \"${CERTIFDIR}\""
I have a requirement to implement distributed performance testing, where multiple slave nodes may be launched in parallel when the user count is high. Hence I need to launch master and slave nodes.
I have tried every way I could find to start jmeter-server in the background, since it has to keep running on the slave node to receive incoming requests.
But I am still unable to start it in the background.
node(performance) {
properties([disableConcurrentBuilds()])
stage('Setup') {
cleanAndInstall()
checkout()
}
max_instances_to_boot = 1
for (val in 1..max_instances_to_boot) {
def instance_id = val
node_builder[instance_id] = {
timestamps {
node(node_label) {
stage('Node -> ' + instance_id + ' Launch') {
def ipAddr = ''
script {
ipAddr = sh(script: 'curl http://xxx.xxx.xxx.xxx/latest/meta-data/local-ipv4', returnStdout: true)
node_ipAddr.add(ipAddr)
}
cleanAndInstall()
checkout()
println "Node IP Address:"+node_ipAddr
dir('apache-jmeter/bin') {
exec_cmd = "nohup sh jmeter-server -Jserver.rmi.ssl.disable=true -Djava.rmi.server.hostname=$ipAddr > ${env.WORKSPACE}/jmeter-server-nohup.out &"
println 'Server Execution Command: ' + exec_cmd
sh exec_cmd
}
sleep time: 1, unit: 'MINUTES'
sh """#!/bin/bash
echo "============ jmeter-server.log ============"
cat jmeter-server.log
echo "============ nohup.log ============"
cat jmeter-server-nohup.out
"""
}
}
}
}
}
parallel node_builder
stage('Execution') {
exec_cmd = "apache-jmeter/bin/jmeter -n -t /home/jenkins/workspace/release-performance-tests/test_plans/delights/fd_regression_delight.jmx -e -o /home/jenkins/workspace/release-performance-tests/Performance-Report -l /home/jenkins/workspace/release-performance-tests/JTL-FD-773.jtl -R xx.0.3.210 -Jserver.rmi.ssl.disable=true -Dclient.tries=3"
println 'Execution Command: ' + exec_cmd
sh exec_cmd
}
}
Getting the following error:
Error in rconfigure() method java.rmi.ConnectException: Connection refused to host: xx.0.3.210; nested exception is:
java.net.ConnectException: Connection refused (Connection refused)
We're unable to provide the answer without seeing the contents of your nohup.out file, which is supposed to contain your script output.
Blind shot: by default JMeter uses secure communication between the master and the slaves, so you need a Java keystore containing the certificates necessary for request encryption. The script is create-rmi-keystore.sh, and you need to run it and perform the configuration prior to starting the JMeter slave.
If you don't need encrypted communication between the master and the slaves, you can turn this feature off so you won't need to create the keystore. It can be done either by adding the following command-line argument:
-Jserver.rmi.ssl.disable=true
like:
nohup jmeter-server -Jserver.rmi.ssl.disable=true &
or alternatively add the following line to the user.properties file (it lives in the "bin" folder of your JMeter installation):
server.rmi.ssl.disable=true
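If the slave is provisioned by the pipeline itself, that line can be appended before jmeter-server starts; a sketch assuming the apache-jmeter/ layout from the question above:
dir('apache-jmeter/bin') {
    // persist the property so jmeter-server picks it up without the -J flag
    sh 'echo "server.rmi.ssl.disable=true" >> user.properties'
}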
More information:
Configuring JMeter
Apache JMeter Properties Customization Guide
Remote hosts and RMI configuration
This was resolved by adding the following inside the node stage:
JENKINS_NODE_COOKIE=dontKillMe nohup sh jmeter-server -Jserver.rmi.ssl.disable=true -Djava.rmi.server.hostname=xx.xx.xx.xxx > ${env.WORKSPACE}/jmeter-server-nohup.out &
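Put into the context of the original launch stage, the step looks roughly like this (the hostname is a placeholder, as above; $WORKSPACE is expanded by the shell rather than by Groovy):
dir('apache-jmeter/bin') {
    // JENKINS_NODE_COOKIE=dontKillMe stops Jenkins from killing the background jmeter-server when the step ends
    sh 'JENKINS_NODE_COOKIE=dontKillMe nohup sh jmeter-server -Jserver.rmi.ssl.disable=true -Djava.rmi.server.hostname=xx.xx.xx.xxx > "$WORKSPACE/jmeter-server-nohup.out" 2>&1 &'
}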
I am stuck trying to get a Jenkinsfile to work. It keeps failing on the sh step and gives the following error:
process apparently never started in /home/jenkins/workspace
...
(running Jenkins temporarily with -Dorg.jenkinsci.plugins.durabletask.BourneShellScript.LAUNCH_DIAGNOSTICS=true might make the problem clearer)
I have tried adding
withEnv(['PATH+EXTRA=/usr/sbin:/usr/bin:/sbin:/bin'])
before the sh step in the Groovy file.
I also tried adding
/bin/sh
in Manage Jenkins -> Configure System in the shell section
I have also tried replacing the sh line in Jenkinsfile with the following:
sh "docker ps;"
sh "echo 'hello';"
sh "./build.sh;"
sh '''#!/bin/sh
echo hello
'''
This is the part of the Jenkinsfile which I am stuck on:
node {
stage('Build') {
echo 'this works'
sh 'echo "this does not work"'
}
}
The expected output is "this does not work", but it just hangs and eventually returns the error above.
What am I missing?
It turns out that the default workingDir value for the default jnlp k8s slave nodes is now /home/jenkins/agent, while I was using the old value /home/jenkins.
Here is the config that worked for me:
containerTemplate(name: 'jnlp', image: 'lachlanevenson/jnlp-slave:3.10-1-alpine', args: '${computer.jnlpmac} ${computer.name}', workingDir: '/home/jenkins/agent')
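For context, that containerTemplate sits inside a podTemplate; a minimal sketch assuming the Kubernetes plugin, with an illustrative label:
podTemplate(label: 'k8s-agent', containers: [
    containerTemplate(name: 'jnlp',
        image: 'lachlanevenson/jnlp-slave:3.10-1-alpine',
        args: '${computer.jnlpmac} ${computer.name}',
        workingDir: '/home/jenkins/agent')   // matches the new default workingDir
]) {
    node('k8s-agent') {
        stage('Build') {
            sh 'echo "this works now"'
        }
    }
}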
It is possible to run into the same trouble with a malformed PATH environment variable, which prevents the sh() step of the Pipeline plugin from calling the shell executable. You can reproduce it with a simple pipeline like this:
node('myNode') {
stage('Test') {
withEnv(['PATH=/something_invalid']) {
/* it hangs and fails later with "process apparently never started" */
sh('echo Hello!')
}
}
}
There is a variety of ways to mangle PATH. For example, you use withEnv(getEnv()) { sh(...) }, where getEnv() is your own method that evaluates the list of environment variables depending on the OS and other conditions. If you make a mistake in the getEnv() method and PATH gets overwritten, you will reproduce this hang.
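A safer pattern is to only ever append to PATH using the PATH+EXTRA syntax rather than replacing it outright; a sketch where getEnv() stands in for the hypothetical helper described above and the directories are purely illustrative:
// Hypothetical helper: returns extra PATH entries without ever overwriting PATH itself.
def getEnv(boolean unix) {
    return unix ? ['PATH+EXTRA=/opt/mytools/bin'] : ['PATH+EXTRA=C:\\mytools\\bin']
}

node('myNode') {
    stage('Test') {
        // isUnix() needs a node context; PATH+EXTRA prepends instead of replacing PATH
        withEnv(getEnv(isUnix())) {
            sh 'echo $PATH'   // still contains the agent's original PATH entries
        }
    }
}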
I am using Jenkins to take a number of parameters, generate an ansible-playbook command and run it. My Jenkins server is also my Ansible server.
My shell step says:
echo $ESXi_IP
echo $VM_NAME
echo $NIC1_MAC
echo $NIC2_MAC
echo $NIC3_MAC
echo $NIC4_MAC
echo $ESXi_HOSTNAME
echo $PLAYBOOK
ansible-playbook $PLAYBOOK --extra-vars "esxi_ip=$ESXi_IP vm_name=$VM_NAME nic1_mac=$NIC1_MAC nic2_mac=$NIC2_MAC nic3_mac=$NIC3_MAC nic4_mac=$NIC4_MAC esxi_hostname=$ESXi_HOSTNAME"
When I run the job, the output is:
+ ansible-playbook /root/ansible/sc-ece.yaml --extra-vars 'esxi_ip=5.232.66.49 vm_name=JenkinsTest nic1_mac=00:50:C0:A8:01:02 nic2_mac=00:50:0A:A9:37:A5 nic3_mac=00:50:0A:FF:FE:4C nic4_mac=00:50:AC:10:01:65 esxi_hostname=tmolab13-14iamesxi4'
ERROR! the playbook: /root/ansible/sc-ece.yaml could not be found
The playbook path is correct; there is no issue with it at all.
What seems to be missing here?
You are correct, Matt & Dave. Permissions on the folder were the issue. Thanks!
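For anyone hitting the same error, a quick way to confirm it is a permissions problem is to check, from the build itself, which user the job runs as and whether that user can read the playbook path; a sketch using the path from the question:
whoami
ls -ld /root/ansible
ls -l /root/ansible/sc-ece.yaml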