I try to invoke the Salt state below using Jenkins:
create_script:
  file.managed:
    - name: /tmp/broc/import_props.sh
    - source: salt://projects/broc/jboss/files/import.sh.jinja
    - template: jinja

Import_properties:
  cmd.script:
    - name: /tmp/broc/import.sh
    - cwd: /tmp/broc
The Jenkins console output is:
ID: create_script
Function: file.managed
Name: /tmp/broc/import.sh
Result: True
Comment: File /tmp/broc/import.sh updated
Started: 11:31:13.736928
Duration: 166.319 ms
Changes:
    ----------
    diff:
        New file
    mode:
        0644

ID: Import_properties
Function: cmd.script
Name: /tmp/broc/import.sh
Result: False
Comment: Command '/tmp/broc/import.sh' run
Started: 11:31:13.903378
Duration: 399.825 ms
Changes:
    ----------
    pid:
        8292
    retcode:
        1
And the Jenkins build finished with SUCCESS:
Succeeded: 21 (changed=22)
Failed:     1
Total states run:     22
Total run time:   30.338 s
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Finished: SUCCESS
My question: the SaltStack state with ID Import_properties has Result: False, so the Jenkins build should also finish as FAILURE. In the case above the SaltStack result is ignored and the build finished with SUCCESS. Is there a way to make the Jenkins build FAILURE based on the SaltStack Result?
I have the below Jenkins pipeline:
try {
    saltCmd = "\"salt -E \"(${target})\" state.apply projects.alip.process-server " +
        "pillar='{\"region\": \"${Region}\",\"siteid\": \"${SiteID}\",\"dbuser\": \"${DBUSER}\",\"dbpass\": \"${DBPASS}\"}'\""
    result = salt authtype: 'pam',
        clientInterface: local(
            arguments: saltCmd,
            blockbuild: true,
            function: 'cmd.run',
            target: "$my_salttarget",
            saveFile: true,
            targettype: 'glob'),
        credentialsId: "$my_saltcred",
        servername: "$my_saltserver"
} catch (e) {
    result = e.toString()
    currentBuild.result = 'FAILURE'
} finally {
    echo result.replace("\\n", '\n')
}
I am new to Jenkins pipeline scripting. Can you suggest how to add a post-build step under finally that parses the Jenkins console output, looks for a string, and marks the build FAILURE if it matches? This is similar to the Text Finder plugin, except that we write it as a pipeline script.
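For reference, a minimal sketch of that finally-block check could look like this; it is not a tested solution, and the strings matched are assumptions based on the salt output pasted above:

    // Hypothetical helper for the 'finally' block: scans the captured salt
    // output and fails the build when any state reports a failure. The match
    // strings are guesses based on the console output pasted above.
    def failBuildIfSaltFailed(String output) {
        if (output.contains('Result: False') || output =~ /Failed:\s*[1-9]\d*/) {
            currentBuild.result = 'FAILURE'
        }
    }

    // usage inside the pipeline above:
    // } finally {
    //     echo result.replace("\\n", '\n')
    //     failBuildIfSaltFailed(result)
    // }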
Even if your state failed, Jenkins believes the salt command it invoked still ran successfully.
Jenkins is not able to detect errors inside salt itself.
It can only tell whether the salt command ran successfully or not.
This is the same as when you run salt on the command line: even if the state fails, the salt shell command still returns 0.
Your issue here is (as others said) that Jenkins uses the return code of the salt command itself and NOT the return code of the action taken by the state you applied.
My 2 cents here is the --retcode-passthrough option you can pass to your salt command.
This option makes the salt command's return code match the result of the states it applied.
Simply put, if anything fails in a state, the salt command will return a failure return code.
Official doc here
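As a sketch of how this could fit the pipeline above (assuming the job can shell out to the salt CLI; the target and state names are taken from the question):

    // Sketch only: with --retcode-passthrough the salt CLI exits nonzero when
    // a state fails, so returnStatus can surface the real result in Jenkins.
    def rc = sh(
        script: "salt --retcode-passthrough -E '(${target})' state.apply projects.alip.process-server",
        returnStatus: true
    )
    if (rc != 0) {
        error "Salt state run failed with return code ${rc}"
    }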
I'm running a basic pipeline that executes pylint on my repository code.
My Jenkins runs on Debian etch, the Jenkins version is 2.231.
At the end of the pipeline, I'm getting the following error :
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: script returned exit code 24
Finished: FAILURE
As this page https://wiki.jenkins.io/display/JENKINS/Job+Exit+Status explains, the error code 24 refers to a "too many open files" error.
If I remove the pylint part of the job, the pipeline executes smoothly.
I tried to configure the limit in /etc/security/limits.conf:
jenkins soft nofile 64000
jenkins hard nofile 64000
and in the Jenkins config file /etc/default/jenkins :
MAXOPENFILES=64000
If I put "ulimit -n" in the pipeline, the configured value is displayed, but it has no effect on the result, which remains: ERROR: script returned exit code 24.
The problem comes from pylint, which does not return exit code 0 even when it runs successfully. Its exit status is a bit mask (1 = fatal, 2 = error, 4 = warning, 8 = refactor, 16 = convention, 32 = usage error), so exit code 24 here means refactor and convention messages were issued, not "too many open files".
The solution is to use the --exit-zero option when running pylint:
pylint --exit-zero src
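Alternatively (a sketch that is not part of the original answer), you can keep pylint's exit code and decide in the pipeline which bits should fail the build:

    // pylint's exit status is a bit mask: 1=fatal, 2=error, 4=warning,
    // 8=refactor, 16=convention, 32=usage error. Here only the fatal/error
    // bits fail the build; 24 (refactor + convention) would pass.
    def rc = sh(script: 'pylint src', returnStatus: true)
    if (rc & 3) {
        error "pylint reported fatal or error messages (exit code ${rc})"
    }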
I have a jenkins job.
It is pretty simple: pull from git, and run the build.
The build is just one step:
Execute Windows batch command
In my use case, I will need to run some python scripts.
Some will fail, others will not.
python a.py
python b.py
What determines the final status of the build?
It seems I can set it with:
echo #STABLE > build.properties
but how are the STABLE/UNSTABLE statuses assigned if not specified by the user?
What happens if b.py raise an error and fails?
Jenkins interprets a pipeline as failed if a command returns a nonzero exit code.
Internally the build status is tracked as currentBuild.currentResult, which can have three values: SUCCESS, UNSTABLE, or FAILURE.
If you want to control the failure/success of your pipeline yourself, you can catch exceptions/exit codes and manually set currentBuild.result (currentBuild.currentResult itself is read-only). Plugins also use this attribute to change the result of the pipeline.
For example:
stage('Example') {
    steps {
        script {
            try {
                sh "exit 1"  // nonzero exit code throws and would fail the pipeline
                sh "exit 0"  // would be marked as passed; never reached after the line above
            } catch (Exception e) {
                currentBuild.result = 'FAILURE'
                // or currentBuild.result = 'UNSTABLE'
            }
        }
    }
}
I am stuck trying to get a Jenkinsfile to work. It keeps failing on an sh step and gives the following error:
process apparently never started in /home/jenkins/workspace
...
(running Jenkins temporarily with -Dorg.jenkinsci.plugins.durabletask.BourneShellScript.LAUNCH_DIAGNOSTICS=true might make the problem clearer)
I have tried adding
withEnv(['PATH+EXTRA=/usr/sbin:/usr/bin:/sbin:/bin'])
before the sh step in the Groovy file.
I also tried adding
/bin/sh
in Manage Jenkins -> Configure System in the shell section.
I have also tried replacing the sh line in the Jenkinsfile with the following:
sh "docker ps;"
sh "echo 'hello';"
sh "./build.sh;"
sh '''
#!/bin/sh
echo hello
'''
This is the part of the Jenkinsfile I am stuck on:
node {
    stage('Build') {
        echo 'this works'
        sh 'echo "this does not work"'
    }
}
The expected output is "this does not work", but it just hangs and then returns the error above.
What am I missing?
It turns out that the default workingDir value for the default jnlp Kubernetes agent is now /home/jenkins/agent, and I was using the old value /home/jenkins.
here is the config that worked for me
containerTemplate(name: 'jnlp', image: 'lachlanevenson/jnlp-slave:3.10-1-alpine', args: '${computer.jnlpmac} ${computer.name}', workingDir: '/home/jenkins/agent')
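For context, here is how that containerTemplate might be wired into a pod template; this is a sketch, and the pod label and the surrounding podTemplate block are assumptions, only the containerTemplate line comes from the config above:

    // Sketch only: the podTemplate wrapper and label are illustrative.
    podTemplate(label: 'my-jnlp-pod', containers: [
        containerTemplate(name: 'jnlp',
                          image: 'lachlanevenson/jnlp-slave:3.10-1-alpine',
                          args: '${computer.jnlpmac} ${computer.name}',
                          workingDir: '/home/jenkins/agent')
    ]) {
        node('my-jnlp-pod') {
            sh 'echo "sh works again"'
        }
    }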
It is possible to run into the same trouble with a malformed PATH environment variable, which prevents the sh() step of the Pipeline plugin from calling the shell executable. You can reproduce it with a simple pipeline like this:
node('myNode') {
    stage('Test') {
        withEnv(['PATH=/something_invalid']) {
            /* it hangs and fails later with "process apparently never started" */
            sh('echo Hello!')
        }
    }
}
There are a variety of ways to mangle PATH. For example, you might use withEnv(getEnv()) { sh(...) } where getEnv() is your own method that builds the list of environment variables depending on the OS and other conditions. If you make a mistake in getEnv() and PATH gets overwritten, the problem is reproduced.
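For example, a hypothetical getEnv() like this reproduces the hang, because it replaces PATH instead of extending it:

    // Hypothetical getEnv(): returning 'PATH=...' overwrites the variable,
    // while 'PATH+EXTRA=...' would prepend to it and keep sh() working.
    def getEnv() {
        return ['PATH=/opt/mytools/bin']  // mistake: the shell can no longer be found
    }

    node('myNode') {
        withEnv(getEnv()) {
            sh('echo Hello!')  // hangs, then "process apparently never started"
        }
    }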
I have a job which is compatible with 2 slaves (configured in different locations). I often experience connectivity issues due to VPN session timeouts, so I am trying to figure out a way to automatically run the job on slave 2 if it fails on slave 1. Please let me know if there is a plugin or any other way to accomplish this.
I think with a freestyle project it would be hard to implement your requirement.
Pipeline script
Check this if you don't know the plugin: How to create a pipeline script.
According to this answer, the Pipeline plugin allows you to write jobs that run on multiple slave nodes using labels:
node('linux') {
    git url: 'https://github.com/jglick/simple-maven-project-with-tests.git'
    sh "make"
    step([$class: 'ArtifactArchiver', artifacts: 'build/program', fingerprint: true])
}
node('windows && amd64') {
    git url: 'https://github.com/jglick/simple-maven-project-with-tests.git'
    sh "mytest.exe"
}
I created this simple pipeline script and it works (this example does not use labels, but you could add them):
def exitStatusInMasterNode = 'success'
node {
    echo 'Hello World in node master'
    echo 'status: ' + exitStatusInMasterNode
    exitStatusInMasterNode = 'failure'
}
node {
    echo 'Hello World in node slave'
    echo 'master status: ' + exitStatusInMasterNode
}
The exitStatusInMasterNode variable is shared across the node blocks.
So if your slave 1 fails, you can set exitStatusInMasterNode to failure, and at the start of slave 2 you can check whether exitStatusInMasterNode is failure in order to run the same build on that slave.
Example:
def exitStatusInMasterNode = 'none'
node {
    try {
        echo 'Hello World in Slave-1'
        throw new Exception('Simulating an error')
        exitStatusInMasterNode = 'success'
    } catch (err) {
        echo err.message
        exitStatusInMasterNode = 'failure'
    }
}
node {
    if (exitStatusInMasterNode == 'success') {
        echo 'Job in slave 1 was success. Slave-2 will not be executed'
        currentBuild.result = 'SUCCESS'
        return
    }
    echo 'Re launch the build in Slave-2 due to failure on Slave-1'
    // exec simple tasks or stages
}
Log of simulated error in slave1
Running on Jenkins in .../multiple_nodes
Hello World in Slave-1
Simulating an error
Running on Jenkins in .../multiple_nodes
Re launch the build in Slave-2 due to failure on Slave-1
Finished: SUCCESS
Log when there is no error in slave 1 (comment out the line: throw new Exception):
Running on Jenkins in .../multiple_nodes
Hello World in Slave-1
Running on Jenkins in .../multiple_nodes
Job in slave 1 was success. Slave-2 will not be executed
Finished: SUCCESS
I am trying to implement machine learning in my Jenkins pipeline.
For that I need the output data of the pipeline for each build.
Some parameters I need are:
Which user triggered the pipeline
Duration of pipeline
Build number with its details
Pipeline pass/fail
If fail, at which stage it failed.
Error in the failed stage. (Why it failed)
Time required to execute each stage
Specific output of each stage (For. eg. : If a stage contains sonarcube execution then output be kind of percentage of codesmells or code coverage)
I need to fetch these details for all builds. How can I get them?
There is a Jenkins API that can be used from Python, but I was only able to get JOB_NAME, the job description, and whether the job is enabled.
Those details weren't useful.
There are 2 ways to get some of the data from your list.
1. Jenkins API
For the first 4 points from the list, you can use the JSON REST API for a specific build to get that data. Example API endpoint:
https://[JENKINS_HOST]/job/[JOB_NAME]/[BUILD_NUMBER]/api/json?pretty=true
1. Which user triggered the pipeline
This will be under the actions array in the response; identify the object in the array by "_class": "hudson.model.CauseAction", and in it you will have a shortDescription key with that information:
"actions": [
{
"_class": "hudson.model.CauseAction",
"causes": [
{
"_class": "hudson.triggers.SCMTrigger$SCMTriggerCause",
"shortDescription": "Started by an SCM change"
}
]
},
2. Duration of pipeline
It can be found under the "duration" key. Example:
"duration": 244736,
3. Build number with its details
I don't know what details you need, but for the build number look for the "number" key:
"number": 107,
4. Pipeline pass/fail
"result": "SUCCESS",
If you need to extract this information for all builds, run a GET request against the job API https://[JENKINS_HOST]/job/[JOB_NAME]/api/json?pretty=true and extract all builds, then run the above-mentioned request for each build you have extracted.
I will write a dummy Python script to do just that later (see EDIT 1 below).
2. Dump data in Jenkinsfile
There is also the possibility to dump some of that information from the Jenkinsfile in a post action.
pipeline {
    agent any
    stages {
        stage('stage 1') {
            steps {
                // use >> to append; a single > would overwrite the file each time
                sh 'echo "Stage 1 time: ${YOUR_TIME_VAR}" >> job_data.txt'
            }
        }
    }
    post {
        always {
            sh 'echo "Result: ${result}" >> job_data.txt'
            sh 'echo "Job name: ${displayName}" >> job_data.txt'
            sh 'echo "Build number: ${number}" >> job_data.txt'
            sh 'echo "Duration: ${duration}" >> job_data.txt'
            archiveArtifacts artifacts: 'job_data.txt', onlyIfSuccessful: false
        }
    }
}
The list of available global variables for a pipeline job can be found at:
https://[JENKINS_HOST]/pipeline-syntax/globals#env
For the rest, you will need to implement your own logic in the Jenkinsfile.
Ad. 5
Create a variable which holds information about the current stage. At the beginning of each stage, set it to the ongoing stage's name. At the end, dump it to the file like the rest of the variables. If the pipeline fails, say on stage foo, this variable will still hold that value in the post action, because a failed pipeline does not advance to the next stage.
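A minimal sketch of that idea (the stage names and the file name are illustrative):

    // Sketch for point 5: 'currentStage' still names the failing stage in
    // the finally block, because a failed pipeline never reaches later stages.
    def currentStage = 'none'
    node {
        try {
            stage('build') { currentStage = 'build'; sh 'make' }
            stage('test')  { currentStage = 'test';  sh 'make test' }
        } finally {
            writeFile file: 'job_data.txt', text: "Last stage: ${currentStage}\n"
            archiveArtifacts artifacts: 'job_data.txt'
        }
    }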
Ad. 6
I'm not sure what you want: a traceback, an error code?
You will probably need to implement your own logging function.
Ad. 7
Make a function that measures the time for each stage and dump the value at the end.
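For example, a small helper like this (the names are illustrative, and it assumes it is called inside a node) could wrap each stage and dump its duration:

    // Sketch for point 7: wraps a stage, measures wall-clock time, and
    // appends it to the data file even when the stage body throws.
    def timedStage(String name, Closure body) {
        def start = System.currentTimeMillis()
        try {
            stage(name) { body() }
        } finally {
            def seconds = (System.currentTimeMillis() - start) / 1000
            sh "echo 'Stage ${name} took ${seconds}s' >> job_data.txt"
        }
    }

    // usage: timedStage('build') { sh 'make' }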
Ad. 8
Also not sure what you mean. Like, build artifacts?
At the end of each build, this job_data.txt file will be archived as a build artifact which can be downloaded later.
If I find a more elegant and simpler solution, I'll edit this post.
Hope it helps in any way.
EDIT 1
Here is the script I've mentioned earlier.
import requests

username = "USERNAME"
password = "PASSWORD"
jenkins_host = "JENKINS_HOST"
jenkins_job = "JOBNAME"

# Fetch the job description to collect all build numbers
request_url = "{0:s}/job/{1:s}/api/json".format(
    jenkins_host,
    jenkins_job,
)
job_data = requests.get(request_url, auth=(username, password)).json()

builds = []
for build in job_data.get('builds'):
    builds.append(build.get('number'))

# Fetch the details of every build
for build in builds:
    build_url = "{0:s}/job/{1:s}/{2:d}/api/json".format(
        jenkins_host,
        jenkins_job,
        build,
    )
    build_data = requests.get(build_url, auth=(username, password)).json()
    build_name = build_data.get('fullDisplayName')
    build_number = build_data.get('number')
    build_status = build_data.get('result')
    build_duration = build_data.get('duration')
    build_trigger = None  # stays None if no CauseAction is present
    for action in build_data.get('actions'):
        if action.get("_class") == "hudson.model.CauseAction":
            build_trigger = action.get('causes')
    print(build_name)
    print(build_status)
    print(build_duration)
    print(build_number)
    print(build_trigger)
Please note you might need to authorize with API Token depending on your security settings.