Jenkins Script Console vs Build Agent

I'm experiencing some odd behavior with a Jenkins build (Jenkins project is a multi-branch pipeline with the Jenkinsfile provided by the source repository). The last step is to deploy the application which involves replacing an artifact on a remote host and then restarting the process that runs it.
Everything works perfectly except for one problem: the service is no longer running after the build completes. I even added some debugging messages after the restart script to prove from the build output that it really was working. But for some reason, after the build exits, the service is no longer running. I've done extensive testing to ensure Jenkins connects to the remote host as the correct user, has the right environment variables set, etc. Plus, the restart script output is very detailed in the first place; there would be no way to get the successful output if it didn't actually work. So I assume the process that runs the deploy steps on the remote host is doing something else after the build completes.
Here is where it gets weird: if I run the same exact deploy commands using the Script Console for the same exact remote host, it works. And the service isn't stopped after successfully starting up.
By "same exact" I mean the script is the same, but the DSL is different between the Script Console and the pipeline. For example, in the Script Console, I use
println "deployscript.sh <args>".execute().text
Whereas in the pipeline I use
pipeline {
    agent {
        label 'mynode'
    }
    stages {
        /* other stages commented out for testing */
        stage('Deploy') {
            steps {
                script {
                    sh 'deployscript.sh <args>'
                }
            }
        }
    }
}
I also don't have any issues running the commands manually via SSH.
Does anyone know what is going on here? Is there a difference in how the Script Console vs the build agent connects to the remote host? Does either of these processes run other commands afterwards? I understand that the SSH session is controlled by a Java process, but I don't know much else about the Jenkins implementation.
If anyone is curious about the application itself, it is a Progress Application Server for OpenEdge (PASOE) instance. The deploy process involves un-deploying the old WAR file, deploying the new one, and then stopping/starting the instance.
UPDATE:
I added a 60-second sleep to the end of the deploy script to give me time to test the service before the Jenkins process ended. The service was still up during that window, so I am certain that the service goes down at the moment the Jenkins build process exits. I am not sure if this is an issue with Jenkins owning a process, but again, the Script Console handles this fine...

Found the issue. It's buried away in some low-level Jenkins documentation, but Jenkins builds have a default behavior of killing any processes spawned by the build. This confirms that Jenkins was the culprit and that the build really was running correctly; the service was simply being killed after the build completed.
The fix is to set the BUILD_ID environment variable (JENKINS_NODE_COOKIE for pipeline jobs, as in my situation) to "dontKillMe".
For example:
pipeline {
    agent { /* set agent */ }
    environment {
        JENKINS_NODE_COOKIE = 'dontKillMe'
    }
    stages { /* set build stages */ }
}
See here for more details: https://wiki.jenkins.io/display/JENKINS/ProcessTreeKiller
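If you only want to exempt the deploy step rather than the whole build, the cookie can also be overridden inline for just that command. A minimal sketch along the same lines (reusing the <args> placeholder from above):

stage('Deploy') {
    steps {
        // Overriding JENKINS_NODE_COOKIE for this one command means the
        // process tree it spawns no longer carries the build's cookie,
        // so the ProcessTreeKiller leaves it alone; the rest of the build
        // keeps the default kill-on-exit behavior.
        sh 'JENKINS_NODE_COOKIE=dontKillMe deployscript.sh <args>'
    }
}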

Related

Rebuild Jenkins Pipeline after specified Time if failed

I am using Pipelines to build our projects in Jenkins.
If the build of a pipeline fails I'd like to automatically start a new build after a predefined period of time.
Thus it is not possible for me to use the retry(3) step in the Jenkinsfiles because, the way I understand it, there would be no possibility for a delay between retries.
A sleep or something similar won't do because it will block an executor.
The Naginator Plugin seems to do exactly what I need but it doesn't seem to work with pipelines.
I tried implementing it in the Jenkinsfile like:
post {
    always {
        echo '-------post called-------'
        retryBuild {
            rerunIfUnstable(true)
            retryLimit(2)
            fixedDelay(60)
        }
        echo '-------post finished-------'
    }
}
This does not throw any errors, and both echoes show up in the build output, but the retryBuild block doesn't actually do anything.
Does anyone have any experience with a similar problem or is there potentially even a way to use Naginator (or other plugins) with Jenkins pipelines?
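For what it's worth, one workaround that avoids both retry() and a blocking sleep is to let a failed build schedule its own successor using the built-in build step with a quiet period. This is only a sketch, not a Naginator replacement; the 600-second delay is an arbitrary example, and retry-limit handling is left out:

post {
    failure {
        script {
            // Re-trigger this same job without waiting for the result;
            // quietPeriod delays the queued build (in seconds), so no
            // executor is occupied during the wait.
            build job: env.JOB_NAME, wait: false, quietPeriod: 600
        }
    }
}

To cap the number of retries you would still need to track a counter yourself, for example as a build parameter that each rerun increments.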

Provide access to workspace files in log while Jenkins Build is running

We want to have a pipeline that builds our application, then pauses, and after the built application has been manually tested, resumes and delivers the tested application.
So I came up with the idea of using an input step to pause the pipeline, like this:
...
stage ("Build"){
// build application here and archive it as artefact
}
timeout(time:5, unit:'DAYS') {
input message:'Approve deployment?'
}
stage ("Deliver"){
// deliver the built application
}
The tester gets 5 days to test the application, then resumes the pipeline, and the application gets delivered.
My problem here is, while the build is still running, the tester can't yet access the artifact on the status page.
So is there any way to provide some kind of download link in the log output that points to the application file I archived in the build stage?
Or is there any other good way to achieve this build->pause->test->resume->deliver workflow in one single pipeline job?
Automation of the test in the pipeline is not an option, as the application needs to be manually flashed on some hardware.
This will get you to the artifacts list (you can append more to the URL after artifact if you want the link to point to a specific file):
...
timeout(time: 5, unit: 'DAYS') {
    echo "Archive available for download: ${env.BUILD_URL}artifact"
    input message: 'Approve deployment?'
}
This requires JENKINS_URL to be set in the system configuration: from your Jenkins home, go to Manage Jenkins --> Configure System and look for Jenkins URL under Jenkins Location.
If you don't have admin access to Jenkins, and JENKINS_URL is not set, you could fudge this with something like
https://known-jenkins-url/job/${JOB_NAME}/${BUILD_NUMBER}/artifact
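Putting the pieces together, a minimal sketch of the whole build-pause-deliver flow in one scripted pipeline (dist/app.war is a made-up path; use whatever your Build stage actually produces):

stage('Build') {
    // build the application here, then archive it so the link below works
    archiveArtifacts artifacts: 'dist/app.war'
}
timeout(time: 5, unit: 'DAYS') {
    // appending the archived file's path after /artifact gives the tester
    // a direct download link while the build is paused at the input step
    echo "Download for testing: ${env.BUILD_URL}artifact/dist/app.war"
    input message: 'Approve deployment?'
}
stage('Deliver') {
    // deliver the tested application
}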

Jenkins Shared Libraries context

I have a pipeline job which loads Jenkinsfile from git repository. My Jenkinsfile looks like this:
#!groovy
@Library('global-utils-lib') _

node("mvn") {
    stage('build') {
        checkout scm
    }
    stage('merge-request') {
        mergeRequest()
    }
}
global-utils-lib is a shared library loaded under Global Pipeline Libraries from another git repo with the following structure:
vars/mergeRequest.groovy
mergeRequest.groovy:
def call() {
    sh "ip addr"
    def workspacePath = env.WORKSPACE
    new File(workspacePath + "/file.txt").text
}
The job runs inside a Docker container (Docker plugin).
When I run this job, the Docker container is provisioned correctly and the SCM is checked out, but I get a FileNotFoundException.
It looks like the code from the shared library is executed on the Jenkins master, not the slave:
- the presented IP comes from the master
- the file is loaded correctly when I pass the correct path to the checkout on the master
How can I run the library code against the slave? What am I missing?
It's generally not a good idea to try and do things like new File() instead of using existing Pipeline steps.
Your Pipeline script is interpreted and executed by the Jenkins master so, as you're seeing, the attempt to use the File API doesn't work as you might expect.
Sticking to Pipeline steps helps ensure that your pipeline is durable (i.e. survives restarts), is pausable, and doesn't block the execution thread, preventing parallel steps from working, for example.
In this case, the existing readFile step can be used.
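For example, the mergeRequest.groovy above could drop the File API in favor of readFile; a minimal sketch:

def call() {
    sh "ip addr"
    // readFile is a Pipeline step, so it resolves the path inside the
    // workspace of the node the step runs on, unlike java.io.File,
    // which is evaluated wherever the Groovy code itself runs (the master)
    def contents = readFile 'file.txt'
    echo contents
}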
I don't know how well the Docker Plugin interacts with Pipeline (though I imagine it should be transparent), and without knowing which agents have the "mvn" label, or whether you can reproduce this outside of a shared library, it's unclear why your sh step would appear to be running on the master.
The Docker Pipeline Plugin is explicitly designed for Pipeline, so it might give better results.

Jenkins pipeline parallel not executing

I'm trying to test out the parallel functionality for a Jenkins pipeline job, but for some reason the individual build steps of the parallel job never get passed off to an executor and processed. Normal single-threaded pipeline jobs have no issue processing. I tried restarting the Jenkins server in case some resources were locked up, but it did not help.
The full script I'm trying to execute is:
def branches = [:]
branches["setup"] = {
    node("nsetup") {
        echo "hello world"
    }
}
parallel branches
I have only one node, the master, and it has 5 available executors. It is configured to "use as often as possible". I'm pretty new to Jenkins and setting up a server for the first time, so maybe there's something I missed in the configuration that isn't related to the job.
Does anybody have any suggestions?
And 2 minutes after I post I figure it out! Every time.
Turns out I just didn't have any idea how the "node" step really works. The parameter in the parentheses is a label expression, so node("nsetup") was waiting for an executor on a node labeled "nsetup", which doesn't exist. I was using it like it was some random logging field. Oops!
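For anyone who lands here with the same symptom, the fix is just to use a label that actually matches a node, or no label at all. A sketch of the corrected script:

def branches = [:]
branches["setup"] = {
    // node with no argument runs on any available executor;
    // node("nsetup") would sit in the queue forever unless some
    // node actually carries the "nsetup" label
    node {
        echo "hello world"
    }
}
parallel branches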

Access Jenkins build log within build script

How would you go about accessing contents of the build (console) log from within a running build script?
I have a deploy script that runs, logs into a series of servers and runs scripts on those servers. I need to obtain certain output from some of those remote scripts and use them later in the build process and also in the completion email.
You can do something like this in the Post Build section, but I don't think you can do it earlier in the job. With the Groovy Postbuild plugin you can get information from the console log:
if (manager.logContains(".*text to find.*")) {
    // logContains matches a regular expression against each line of the log;
    // the manager object also exposes actions such as addWarningBadge
    manager.addWarningBadge("found the text")
}
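If the goal is to capture the remote scripts' output for use in later build steps rather than to scrape the finished console log, the output can also be captured directly at the point the command runs. A sketch, assuming a Pipeline job and a hypothetical get-version.sh on the remote host:

// returnStdout hands back the command's output instead of just logging it
def version = sh(returnStdout: true,
                 script: 'ssh deploy@remote-host ./get-version.sh').trim()
echo "Captured for later stages and the completion email: ${version}"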
