Jenkins pipeline build stuck forever if node doesn't exist

I have a Jenkins pipeline like:
node("slave1"){
echo "Building very very complicated things"
}
If the node "slave1" does not exist in my Jenkins setup, the build is stuck forever.
I know I could use the timeout keyword and wrap the node command. However, that is not a good fit, since the timeout could fire either because the node doesn't exist or because the build simply takes a long time. That's not really a solution.
Is there any call I can make to check whether a node exists?
I use Jenkins 2.32.2 and pipeline plugin version 2.1.

You could use timeout like you say, only having the node do something trivial to test the waters. If that succeeds, you'll know "slave1" exists and you can call node again to do the actual build.
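A minimal sketch of that idea in a scripted pipeline; the two-minute limit and the nodeExists flag are illustrative assumptions, not part of the original answer:
def nodeExists = false
try {
    timeout(time: 2, unit: 'MINUTES') {
        node('slave1') {
            nodeExists = true   // only reached if an executor on "slave1" was actually allocated
        }
    }
} catch (err) {
    echo "Could not allocate 'slave1': ${err}"
}

if (nodeExists) {
    node('slave1') {
        echo "Building very very complicated things"
    }
} else {
    error "Node 'slave1' does not exist or never became available"
}
The probe costs one short node allocation, but it lets the timeout distinguish "node missing" from "build taking long".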

Related

Rebuild Jenkins Pipeline after specified Time if failed

I am using Pipelines to build our projects in Jenkins.
If the build of a pipeline fails I'd like to automatically start a new build after a predefined period of time.
Thus it is not possible for me to use the retry("3") command in the Jenkinsfile because, the way I understand it, there would be no possibility of a delay.
Sleep or something similar won't do because it will block an executor.
The Naginator plugin seems to do exactly what I need, but it doesn't seem to work with pipelines.
I tried implementing it in the Jenkinsfile like:
post {
    always {
        echo '-------post called-------'
        retryBuild {
            rerunIfUnstable(true)
            retryLimit(2)
            fixedDelay(60)
        }
        echo '-------post finished-------'
    }
}
This does not throw any errors, and both echoes are shown in the pipeline build. However, it doesn't do anything either.
Does anyone have any experience with a similar problem or is there potentially even a way to use Naginator (or other plugins) with Jenkins pipelines?
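One commonly suggested workaround, sketched here as an assumption rather than a confirmed answer: keep the retry and the delay outside of any node {} block, so the wait only occupies a lightweight flyweight executor on the master rather than a build executor. The retry count and delay below are illustrative:
def attempt = 0
retry(3) {
    if (attempt > 0) {
        sleep(time: 60, unit: 'SECONDS')   // outside node {}, so no heavyweight executor is held
    }
    attempt++
    node {
        echo "Running build attempt ${attempt}"
        // actual build steps go here
    }
}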

Jenkins Blue Ocean Plugin to show the node the stage ran on

Right now, I have no idea which node a stage was run on unless I create a step to execute hostname, which is cumbersome.
Is there way to see which node the Stage ran on?
You can also use ${env.NODE_NAME}; there's no need to exec hostname.
With a scripted pipeline at least, there's no guaranteed one-to-one relationship between a node and a stage. You can actually have stages that work on multiple nodes, sequentially, or even in parallel.
So I doubt you can find a plugin that will render the pipeline in BO with that piece of information.
You could always print out (echo) the node name as the first step of your stage, so that it shows readily in the BO log at the bottom.
Possibly, you could even extend the stage command with a wrapper DSL command, myStage, that prints that out first thing as well (keeping things DRY).
That custom DSL command could also prefix or suffix your stage name with the node name.
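A minimal sketch of the myStage wrapper mentioned above, assuming it is called inside a node {} block so env.NODE_NAME is set (the 'linux' label is illustrative):
def myStage(String name, Closure body) {
    stage("${name} [${env.NODE_NAME}]") {   // fold the node name into the stage name
        echo "Running stage '${name}' on node: ${env.NODE_NAME}"
        body()
    }
}

node('linux') {
    myStage('build') {
        sh 'make'
    }
}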

How to execute groovy/java code in the context of a jenkins-pipeline slave node block?

In this snippet:
stage('build') {
    node('myslave') {
        git(url: 'git@hostname:project.git')
        println(InetAddress.getLocalHost().getHostName())
    }
}
The git step is executed correctly and checks out code into node's workspace.
But why do I get the master's hostname when executing the second command?
For example, this also does not work in the context of a node() {} block:
new File("${WORKSPACE}").listFiles()
which does not actually list the files in the ${WORKSPACE} folder.
All Groovy code in a Pipeline script is executed on the master. I've been unable to find any way to execute generic Groovy code on the slave, not due to a lack of functionality in the Jenkins core, but because of problems with Pipeline Groovy and serialisation of objects. I found this related question, which addresses remoting in Groovy.
It is, however, possible to do file operations on the slave side; see this answer for an example of how you can access files on the slave.
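A minimal sketch of that distinction, using pipeline steps (which run in the node's workspace) instead of plain Java calls (which run on the master); findFiles assumes the Pipeline Utility Steps plugin, and the repository URL and label are just the ones from the question:
stage('build') {
    node('myslave') {
        git(url: 'git@hostname:project.git')
        sh 'hostname'                          // runs on the slave and prints the slave's hostname
        def files = findFiles(glob: '**/*')    // pipeline step: lists files in the slave's workspace
        echo "Workspace contains ${files.length} entries"
        // println(InetAddress.getLocalHost().getHostName()) would still print the master's hostname
    }
}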

Jenkins how to create pipeline manual step

Prior to Jenkins 2 I was using the Build Pipeline Plugin to build and manually deploy the application to a server.
Old configuration:
That worked great, but I want to use the new Jenkins pipeline, generated from a Groovy script (Jenkinsfile), to create a manual step.
So far I have come up with the Jenkins input step.
The Jenkinsfile script used:
node {
    stage 'Checkout'
    // Get some code from repository
    stage 'Build'
    // Run the build
}

stage 'deployment'
input 'Do you approve deployment?'
node {
    // deploy things
}
But this waits for user input, meaning the build is not marked as completed. I could add a timeout to the input, but that won't allow me to pick/trigger a build and deploy it later on.
How can I achieve the same/similar result for a manual step/trigger with the new Jenkins pipeline as before with the Build Pipeline Plugin?
This is a huge gap in the Jenkins Pipeline capabilities, IMO. It is definitely hard to provide, due to the fact that a pipeline is a single job. One solution might be to "archive" the workspace as an "artifact" (tar and archive **/* as 'workspace.tar.gz'), and then have another pipeline copy the artifact and untar it into the new workspace. This allows the second pipeline to pick up where the previous one left off. Of course, there is no way to guarantee that the second pipeline cannot be executed out of turn or more than once, which is too bad. The Delivery Pipeline Plugin really shines here: you execute a new pipeline right from the view, instead of from the first job. Anyway, not much of an answer, but it's the path I'm going to try.
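A minimal sketch of that hand-off, assuming two separate pipeline jobs and the Copy Artifact plugin for the copyArtifacts step; the job name 'build-job' is illustrative:
// Pipeline 1: build, then archive the whole workspace as an artifact
node {
    stage('Build') {
        // ... build steps ...
        sh 'tar czf workspace.tar.gz --exclude=workspace.tar.gz .'   // avoid archiving the tarball into itself
        archiveArtifacts artifacts: 'workspace.tar.gz'
    }
}

// Pipeline 2 (e.g. the "deploy" job): copy the artifact and pick up where the first left off
node {
    stage('Deploy') {
        copyArtifacts projectName: 'build-job', selector: lastSuccessful()
        sh 'tar xzf workspace.tar.gz'
        // ... deploy steps ...
    }
}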
EDIT: This plugin looks promising:
https://github.com/jenkinsci/external-workspace-manager-plugin/blob/master/doc/PIPELINE_EXAMPLES.md

Jenkins pipeline parallel not executing

I'm trying to test out the parallel functionality for a Jenkins pipeline job, but for some reason the individual build steps of the parallel job never get passed off to an executor and processed. Normal single-threaded pipeline jobs have no issue processing. I tried restarting the Jenkins server in case some resources were locked up, but it did not help.
The full script I'm trying to execute is:
def branches = [:]
branches["setup"] = {
    node("nsetup") {
        echo "hello world"
    }
}
parallel branches
I have only one node, the master, and it has 5 available executors. It is configured to "use as often as possible". I'm pretty new to Jenkins and setting up a server for the first time, so maybe there's something I missed in the configuration that isn't related to the job.
Does anybody have any suggestions?
And two minutes after I post, I figure it out! Every time.
It turns out I just didn't have any idea how the node command really works. By specifying a parameter in the parentheses, I was preventing the branch from ever being handed to an executor: that argument tells Jenkins to run only on a node matching that label, and I was using it as if it were some random logging field. Oops!
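A minimal sketch of the fix, assuming no agent is actually labelled "nsetup": drop the label so any available executor (including the master's) can run the branch:
def branches = [:]
branches["setup"] = {
    node {                       // no label: schedule on any available executor
        echo "hello world"
    }
}
parallel branches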
