I have a step in my pipeline which does this:
sh("shmig -m ${app_root}/${migration_folder} -t mysql -H $mysql_server -l $USERNAME -p $PASSWORD -d $schema up")
It works fine, but sometimes I get this error:
java.io.NotSerializableException: org.jenkinsci.plugins.workflow.job.WorkflowJob
Nothing changes between builds, and I don't understand this error. Do you have any idea?
For more information about the call, it is done like this:
node('docker') {
    step('shmig') {
        smhig()
    }
}
def smhig() {
    ...
    sh("shmig -m ${app_root}/${migration_folder} -t mysql -H $mysql_server -l $USERNAME -p $PASSWORD -d $schema up")
}
Are there any variable declarations/assignments before that `sh("shmig -m ...")` line? I used to face the same error, and it went away after I changed all variable declarations from
myVar = myVal
to
def myVar = myVal
Not sure if that helps, but I hope so.
I have a password that looks like this: KMqJH9OL?LoNw:w=ZgD1;?zLrH<c. When I try to use this password in the Groovy script, it breaks because of the ; and <.
I have tried different ways to escape it; none of them worked. In the code below, BITBUCKET_PASS is the variable holding the above password.
sh '''
yarn cov-report -c ${BUILD_VERSION} -u ${BITBUCKET_USER} -p ${BITBUCKET_PASS} $PWD/backend/test_report/lcov.info
'''
Here's the code that I tried; it didn't work either:
sh '''
yarn cov-report -c ${BUILD_VERSION} -u ${BITBUCKET_USER} -p "${BITBUCKET_PASS}" $PWD/backend/test_report/lcov.info
'''
Please look at String interpolation in Groovy scripts.
Instead of using {}, try it like this:
sh '''
yarn cov-report -c $BUILD_VERSION -u $BITBUCKET_USER -p $BITBUCKET_PASS $PWD/backend/test_report/lcov.info
'''
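The underlying point: inside `sh '''...'''` (a single-quoted Groovy string) the variables are expanded by the shell, not by Groovy, and a double-quoted shell expansion passes the value through as one argument even when it contains metacharacters like ; and <. A minimal sketch, using the password from the question and `printf` standing in for the real `yarn` call:

```shell
# The value contains ';' and '<', which are shell metacharacters.
BITBUCKET_PASS='KMqJH9OL?LoNw:w=ZgD1;?zLrH<c'

# Double-quoting the expansion passes the value through intact as one argument.
printf '%s\n' "$BITBUCKET_PASS"
```

Single quotes around the assignment keep the literal value; double quotes around the expansion keep it a single word.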
Ansible v2.11.x
I have a Jenkins pipeline that does this. All the $VARIABLES are passed in from the job's parameters.
withCredentials([string(credentialsId: "VAULT_PASSWORD", variable: "VAULT_PASSWORD")]) {
    stage("Configure $env.IP_ADDRESS") {
        sh """
        ansible-playbook -i \\"$env.IP_ADDRESS,\\" \
            -e var_host=$env.IP_ADDRESS \
            -e web_branch=$env.WEB_BRANCH \
            -e web_version=$env.WEB_VERSION \
            site.yml
        """
    }
}
My playbook is this:
---
- hosts: "{{ var_host | default('site') }}"
roles:
- some_role
I have a groups_vars/all.yml file meant to be used by ad-hoc inventories like this. When I run the pipeline, I simply get the following, and the run does nothing
22:52:29 + ansible-playbook -i "10.x.x.x," -e var_host=10.x.x.x -e web_branch=development -e web_version=81cdedd6fe-20210811_2031 site.yml
22:52:31 [WARNING]: Could not match supplied host pattern, ignoring: 10.x.x.x
If I go on the build node and execute exactly the same command, it works. I can also execute the same command on my Mac, and it works too.
So why does the ad-hoc inventory not work when executed in the pipeline?
This post gave me a clue.
The correct syntax that worked for me is:
withCredentials([string(credentialsId: "VAULT_PASSWORD", variable: "VAULT_PASSWORD")]) {
    stage("Configure $env.IP_ADDRESS") {
        sh """
        ansible-playbook -i $env.IP_ADDRESS, \
            -e var_host=$env.IP_ADDRESS \
            -e web_branch=$env.WEB_BRANCH \
            -e web_version=$env.WEB_VERSION \
            site.yml
        """
    }
}
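Why the original failed: inside a Groovy `"""` string, `\\"` emits a literal backslash-quote, so the shell hands Ansible a host pattern that contains the quote characters themselves, and no host literally named `"10.x.x.x,"` exists. A sketch of the difference, with `printf` showing the argument exactly as a program would receive it (`10.0.0.1` is a placeholder address):

```shell
IP='10.0.0.1'

# With the escaped quotes, the quote characters become part of the argument:
printf '%s\n' \"$IP,\"

# Without them, the argument is just the address plus the trailing comma:
printf '%s\n' "$IP,"
```

The first form prints the address wrapped in literal double quotes; the second prints the bare `10.0.0.1,` that Ansible's `-i` actually expects.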
I have a requirement where I need to send the status of the Jenkins slaves to InfluxDB. To do that, I need to run a curl command from a Jenkins Groovy script.
My script looks like this:
int value = 0;
for (Node node in Jenkins.instance.nodes) {
    if (!node.toComputer().online) {
        value = 1;
    } else {
        value = 0;
    }
    curl -i -XPOST http://localhost:8086/write?db=jenkins_db&u=user&p=pass --data-binary 'mymeas,tag=$node.nodeName status=$value'
}
But after running the script, no values appear in InfluxDB.
Any idea what might be wrong here?
PS: I also tried
def response = [ 'bash', '-c', "curl", "-i", "-XPOST", "http:/localhost:8086/write?db=jenkins_db&u=user&p=pass", "--data-binary", "\'mymeas tag=$node.nodeName status=$value"\' ].execute().text
You just need to echo your curl command
echo curl -i -XPOST http://localhost:8086/write?db=jenkins_db&u=user&p=pass --data-binary 'mymeas,tag=$node.nodeName status=$value'
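Note also that once this command does reach a shell, the unquoted `&` characters in the URL split the command: the shell backgrounds everything before the first `&` and treats `u=user` and `p=pass` as separate assignments, so InfluxDB never sees the credentials. Quoting the URL keeps it as one argument. A sketch with `printf` in place of `curl`, since no InfluxDB server is assumed here:

```shell
# Quoting keeps the '&' characters inside the URL instead of letting the
# shell interpret them as command separators.
URL='http://localhost:8086/write?db=jenkins_db&u=user&p=pass'

# Quoted: the full query string survives as a single argument.
printf '%s\n' "$URL"
```

The same quoting applies when the URL is passed directly: `curl -i -XPOST 'http://localhost:8086/write?db=jenkins_db&u=user&p=pass' ...`.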
I am trying to create a Jenkins pipeline where I need to execute multiple shell commands and use the result of one command in the next. I found that wrapping the commands in a pair of triple single quotes ''' can accomplish this. However, I am facing issues when using a pipe to feed the output of one command into another. For example:
stage('Test') {
sh '''
echo "Executing Tests"
URL=`curl -s "http://localhost:4040/api/tunnels/command_line" | jq -r '.public_url'`
echo $URL
RESULT=`curl -sPOST "https://api.ghostinspector.com/v1/suites/[redacted]/execute/?apiKey=[redacted]&startUrl=$URL" | jq -r '.code'`
echo $RESULT
'''
}
Commands with pipes are not working properly. Here is the Jenkins console output:
+ echo Executing Tests
Executing Tests
+ curl -s http://localhost:4040/api/tunnels/command_line
+ jq -r .public_url
+ URL=null
+ echo null
null
+ curl -sPOST https://api.ghostinspector.com/v1/suites/[redacted]/execute/?apiKey=[redacted]&startUrl=null
I tried entering all these commands into the Jenkins Snippet Generator for the pipeline, and it gave the following output:
sh ''' echo "Executing Tests"
URL=`curl -s "http://localhost:4040/api/tunnels/command_line" | jq -r \'.public_url\'`
echo $URL
RESULT=`curl -sPOST "https://api.ghostinspector.com/v1/suites/[redacted]/execute/?apiKey=[redacted]&startUrl=$URL" | jq -r \'.code\'`
echo $RESULT
'''
Notice the escaped single quotes in the commands jq -r \'.public_url\' and jq -r \'.code\'. Using the code this way solved the problem.
UPDATE: After a while, even that started to give problems. Certain commands were executing prior to these: one was grunt serve and the other was ./ngrok http 9000. I added some delay after each of those commands, and that solved the problem for now.
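For reference, here is a self-contained sketch of the pattern itself: capture the output of a pipeline with command substitution, then reuse it in the next command. `$( )` is used instead of backticks (it nests and quotes more predictably inside sh blocks), and `sed` stands in for `jq` so the sketch has no external dependency; the JSON value is made up:

```shell
JSON='{"tunnels":[],"public_url":"https://example.ngrok.io"}'

# Extract the value of "public_url" from the JSON (sed stands in for jq here).
URL=$(printf '%s' "$JSON" | sed 's/.*"public_url":"\([^"]*\)".*/\1/')

# The captured value is available to the next command in the same sh block.
echo "$URL"
```

Inside a Groovy `sh '''...'''` block, `$( )` also avoids the escaped-single-quote dance that backticks plus `jq -r '...'` forced above.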
The following scenario shows a real example that may need multiline shell commands: say you are using a plugin like Publish Over SSH and you need to execute a set of commands on the destination host in a single SSH session:
stage ('Prepare destination host') {
sh '''
ssh -t -t user@host 'bash -s << 'ENDSSH'
if [[ -d "/path/to/some/directory/" ]]; then
    rm -f /path/to/some/directory/*.jar
else
    sudo mkdir -p /path/to/some/directory/
    sudo chmod -R 755 /path/to/some/directory/
    sudo chown -R user:user /path/to/some/directory/
fi
ENDSSH'
'''
}
Special notes:
- The last ENDSSH' should not have any characters before it; it must be at the start of a new line.
- Use ssh -t -t if you have sudo within the remote shell command.
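The same `bash -s` plus quoted-heredoc pattern can be exercised locally, without the ssh hop, which makes it easy to test the script body before pointing it at a remote host (the directory path here is arbitrary):

```shell
# Feed a script to bash over stdin; the quoted 'ENDSSH' delimiter stops the
# outer shell from expanding anything, so the inner bash sees the text verbatim.
bash -s << 'ENDSSH'
dir=/tmp/heredoc-demo
if [[ -d "$dir" ]]; then
    rm -f "$dir"/*.jar
else
    mkdir -p "$dir"
fi
echo "prepared $dir"
ENDSSH
```

Because the delimiter is quoted, `$dir` is expanded by the inner bash, not by the shell that builds the heredoc, which is exactly the behavior you want on the remote side.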
I split the commands with &&
node {
    FOO = "world"
    stage('Preparation') { // for display purposes
        sh "ls -a && pwd && echo ${FOO}"
    }
}
The example outputs:
- ls -a (the files in your workspace)
- pwd (the workspace location)
- echo world
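`&&` is the shell's short-circuit operator: each command runs only if the previous one exited with status 0, so a failure early in the chain stops the rest. A standalone sketch (the directory name is arbitrary):

```shell
# The echo runs only because both mkdir and cd succeeded.
mkdir -p /tmp/and-demo && cd /tmp/and-demo && echo "in $PWD"

# A failing first command short-circuits the chain; || provides the fallback.
false && echo "never printed" || echo "skipped"
```

If you want the next command to run regardless of the previous one's status, separate them with `;` (or newlines) instead of `&&`.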
Here is the shell script I am trying to run. The commands work when run directly, but I get errors when running them from the script.
#!/bin/bash
# Sets CE IP addresses to act as LUS on pgsql

# Checks that the user is logged in as root
if [ $(id -u) = "0" ]; then
    # Asks user for the IP of CE1
    echo -n "Enter the IP address of your first CE's management module > "
    read CE1
    $(psql -U asm -d asm -t -c) echo """update zr_fsinstance set lu_order='1' where managementaccesspointhostname = '$CE1';"""
    echo "LUS settings have been completed"
else
    # Warns user of error and sends status to stderr
    echo "You must be logged in as root to run this script." >&2
    exit 1
fi
Here is the error:
psql: option requires an argument -- 'c'
Try "psql --help" for more information.
update zr_fsinstance set lu_order='1' where managementaccesspointhostname = '10.134.39.139';
Instead of
$(psql -U asm -d asm -t -c) echo
"""update zr_fsinstance
set lu_order='1' where managementaccesspointhostname = '$CE1';"""
Try:
$(psql -U asm -d asm -t -c "UPDATE zr_fsinstance SET lu_order='1'
WHERE managementaccesspointhostname = '${CE1}';")
OR (if you prefer):
`psql -U asm -d asm -t -c "UPDATE zr_fsinstance SET lu_order='1'
WHERE managementaccesspointhostname = '${CE1}';"`
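The remaining subtlety is the two layers of quoting: the IP is a string on the SQL side, so it needs single quotes inside the statement, while double quotes around the whole statement belong to the shell. Building the statement in a variable first makes both layers visible. A sketch reusing the table and column names from the question (`printf` shows the statement; the actual `psql` call is commented out since it needs a running server):

```shell
CE1='10.134.39.139'

# Single quotes around ${CE1} belong to SQL; the surrounding double quotes
# belong to the shell and allow ${CE1} to expand.
SQL="update zr_fsinstance set lu_order='1' where managementaccesspointhostname = '${CE1}';"

printf '%s\n' "$SQL"
# psql -U asm -d asm -t -c "$SQL"   # requires a reachable PostgreSQL instance
```

Passing the statement as a single `-c` argument also avoids the `option requires an argument -- 'c'` error from the original, where `-c` was given nothing at all.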