I would like to execute my job in a remote node passing the domain name as node arg.
Someone knows how to build this jenkinsfile?
I can't get it to execute the following way:
node('jenkins.mydomain.com') {
    build 'remote_exec'
}
There are actually two major issues within your two lines of code :-)
node('jenkins.mydomain.com') {
This will build on a build agent with the label jenkins.mydomain.com. If you have only one build agent carrying that label, this should work, but the label is not the hostname! (Note: I'm not entirely sure whether dots are allowed in labels, but you could just as well call it whateverserver.)
So this would allocate an executor slot (to run the code within the closure) on a build agent matching the given label...
build 'remote_exec'
and then trigger yet another build for the job called remote_exec. This job (assuming it exists and you don't have this as a third issue^^) will then be built on an agent matching its own labels, ignoring the one given in the node(label) step.
If you want the remote_exec job to run on a specific build agent only, then add the node step there!
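For illustration, a minimal sketch of that idea (my-agent-label is just a placeholder for whatever label you assign to that one agent in its node configuration):

// Jenkinsfile of the remote_exec job itself
node('my-agent-label') {    // placeholder label, assigned only to the desired agent
    // the actual work of remote_exec runs here
    sh 'hostname'
}

This way the restriction travels with the job itself instead of with whoever triggers it.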
Related
I have several agents configured in Jenkins.
For one of my pipeline jobs, I wish to choose between two of my agents, i.e. MYHOST11-ANSIBLE-SLAVE and MYHOST22-ANSIBLE-SLAVE, whichever is available. Thus, if MYHOST11-ANSIBLE-SLAVE is unavailable, my Jenkins pipeline job should switch to using MYHOST22-ANSIBLE-SLAVE.
Can you please suggest what changes I need in my pipeline code below?
pipeline {
    agent {
        node {
            label 'MYHOST11-ANSIBLE-SLAVE'
        }
    }
    stages {
        stage('Precheck') {
            steps {
                sh "echo Im from Jenkins>/tmp/jenkinsmoht.txt"
            }
        }
    }
}
Note: I want my pipeline to choose only between the two agents I mentioned, as only they have ansible, which my pipeline invokes.
Other agents don't have ansible, thus my pipeline would fail there.
I think you have your nomenclature strategy a little skewed...
Let's say you have two nodes, named "HOST11" and "HOST22". Those two have ansible installed. Other nodes (eg "HOST33") do not. Those are the names of the Nodes, reflective of the Hosts the agents run on.
You want to configure your Nodes (/computer/<NodeName>/configure) with "labels" according to their characteristics, in this case "ansible", thus creating a "pool of nodes" of similar configuration.
You then use the label of the characteristic ("ansible") to assign the job to the pool of servers with that corresponding label (and characteristic). By assigning labels to nodes, you can specify the resources you want to use for specific jobs, and set up graceful queuing for your jobs.
eg:
agent {
    node {
        label 'ansible'
    }
}
Jenkins will then pick the first available node with a matching label and run there. If that node is not available, it will try the next. If none are available, the job will remain queued.
If you choose to use a host name (which in truth is also just a "label"), then you can only run on that one node.
Another distinction: "available node" in Jenkins implies on-line; if all executors are busy, it is still "available". Jenkins jobs are "sticky": a job will wait until a node it has previously run on has a free executor. That can also result in the first node being overloaded. If that's a problem, look at installing the "Least Load" plugin, which acts as a load balancer using various criteria.
See this post to further constrain jobs using multiple labels.
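For example, a small sketch of constraining a job with more than one label (the extra label linux here is just a placeholder for another characteristic you may have configured on your nodes):

agent {
    node {
        label 'ansible && linux'   // only nodes carrying both labels are eligible
    }
}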
ps: If your nodes are similar but not identical, you may use the "Node Properties" or the "Slave Setup" plugin to make them transparently compatible with your job (e.g. set VAR to different values/paths).
Case:
I have 3 machines (A, B, C) as slaves (sharing the same node label, e.g. 'build').
I have a pipeline which may trigger different downstream jobs, and I need to make sure that the job and all downstream jobs use the same node (for sharing some files etc.). How can I do that?
a) I pass the node label to the downstream job, but I am not sure if the downstream job will take the same node (the parent job uses slave "A" and I pass the node label 'build' to the downstream job, but maybe the downstream job takes slave 'B').
b) Is there some way to get the slave the pipeline is executing on at runtime, so that I can pass this slave name to the downstream job?
Or is there a better way to do this?
I advise you to try the NodeLabel Parameter Plugin.
Once installed, check the 'This project is parameterized' option and select 'Node' from the 'Add Parameter' drop-down.
It will populate all nodes in a drop-down when building the job with parameters.
It also has some other options which may help you.
The most important question to me would be: why do they need to run on the very same node?
Anyway, one way to achieve this would be to retrieve the name of the node in the node block of the first pipeline, like this (CAUTION: I was not able to verify the code below):
// Code for the upstream job
@NonCPS
def getNodeName(def context) {
    context.toComputer().name
}

def nodeName = 'undefined'
node('build') {
    // getContext(hudson.FilePath) returns the workspace of the node this block runs on;
    // the helper above turns it into the node name
    nodeName = getNodeName(getContext(hudson.FilePath))
}
build job: 'downstream', parameters: [string(name: 'nodeName', value: nodeName)]
In the downstream job you use that string parameter as input to your node block. Of course, you should make sure the downstream job actually is parameterized in the first place, with a string parameter named nodeName:
node(nodeName) {
    // do some stuff
}
Even with static agents, workspaces are eventually cleaned up, so don't rely on the existence of files in the workspace across builds.
Just archive whatever you need in the upstream job (using the archive step) and then use the Copy Artifact plugin in downstream jobs to get what you need there. You'll probably need to parameterize downstream jobs to pass them a reference to the upstream artifact(s) you need (there are plenty of selectors available in the Copy Artifact plugin that you can play with to achieve what you want).
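A rough sketch of that approach, with placeholder names (the 'build' label, the job names, output.tar.gz, and the upstreamBuild parameter are all assumptions for illustration):

// Upstream job: archive what the downstream job will need
node('build') {
    sh 'tar czf output.tar.gz some/dir'           // produce the file (placeholder command)
    archiveArtifacts artifacts: 'output.tar.gz'
}
// pass a reference to this build so the downstream job can pick the right artifact
build job: 'downstream', parameters: [string(name: 'upstreamBuild', value: env.BUILD_NUMBER)]

// Downstream job: fetch the archived file via the Copy Artifact plugin
node {
    copyArtifacts projectName: 'upstream',
                  selector: specific(params.upstreamBuild),   // one of the plugin's build selectors
                  filter: 'output.tar.gz'
}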
If you are triggering child jobs manually from a pipeline, then you can use syntax like this to pass a specific node label:
build job: 'test_job', parameters: [[$class: 'LabelParameterValue', name: 'node', label: 'tester1']]
build job: 'test_job', parameters: [[$class: 'LabelParameterValue', name: 'node', label: 'tester2']]
The name of the node the pipeline is currently running on you should be able to get this way: ${env.NODE_NAME}
Found at: How to trigger a jenkins build on specific node using pipeline plugin?
ref. to docs- https://jenkins.io/doc/pipeline/steps/pipeline-build-step/
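If the goal from the question is to pin the downstream job to the very node the upstream pipeline is running on, the two snippets above can be combined, roughly like this (assuming test_job has a label parameter named node):

node('build') {
    // env.NODE_NAME holds the name of the node this block was allocated on;
    // a node's name can be used as a label too
    build job: 'test_job',
          parameters: [[$class: 'LabelParameterValue', name: 'node', label: env.NODE_NAME]]
}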
But yes, if you want to manipulate files from this job in other jobs, then you will need to use e.g. the mentioned Copy Artifact plugin, because the workspaces of the jobs are independent and each job will have different content.
How can I add the little blue tags to the stage view in a Jenkins Pipeline?
I was searching for this as well and found the following after reading Hatim's answer:
The line that was supposed to show the node label is commented out:
source
The referenced issue JENKINS-33290 is Resolved with the last comment:
Resolved by removing the functionality, since a correct implementation imposes unacceptable complexity and overhead.
So I'm afraid it's not coming back soon and all those screenshots online are outdated.
Is the name or the label of the node used?
Please refer to this
https://jenkins.io/doc/pipeline/steps/workflow-durable-task-step/
node: Allocate node
Allocates an executor on a node (typically a slave) and runs further code in the context of a workspace on that slave.
label
Computer name, label name, or any other label expression like linux && 64bit to restrict where this step builds. May be left blank, in which case any available executor is taken.
In this case, the stage is executed on the master. If you configure your Jenkins pipeline to be executed on different platforms (master and slaves), then you will be able to see the labels of your slave environments.
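In other words, the node step accepts either form. A quick sketch (agent-name and the labels are placeholders):

node('agent-name') {           // matches a specific node by its name, which acts as an implicit label
    echo "running on ${env.NODE_NAME}"
}
node('linux && 64bit') {       // a label expression: any node carrying both labels qualifies
    echo "running on ${env.NODE_NAME}"
}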
With DSL I can do something like:
bnumber = build.environment.get("BUILD_NUMBER")
build("Compile.Net", BUILD_NUMBER: bnumber)
which is great. It seems to set the BUILD_NUMBER var of the downstream job. However, the display name still shows the automatically incremented number, and if I manually start the job afterwards, it will have a build number incremented from the wrong one (not the one that was passed as a parameter). I guess another action is needed, such as a shell script or something, to set the BUILD_NUMBER and display name and to save the incremented value in the configuration as the nextBuildNumber file. Perhaps this plugin could help:
https://wiki.jenkins-ci.org/display/JENKINS/Next+Build+Number+Plugin
The question is whether there is a better way of doing it, or should I continue working in the same direction? Is there a better way of setting the build number of a downstream job to be the same as that of the build flow job?
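As a sketch of the display-name part only: if the downstream job were a Pipeline job receiving the number as a string parameter (here assumed to be called BUILD_NUMBER), its display name could be derived from it; the nextBuildNumber bookkeeping would still be a separate concern:

// In the downstream Pipeline job, assuming a string parameter named BUILD_NUMBER
currentBuild.displayName = "#${params.BUILD_NUMBER}"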
I want to be able to temporarily exclude a specific job from running on a node in a label group.
jobA, jobB, jobC are tied to run on label general
nodeA, nodeB, nodeC have the label general on them.
Let's say that jobA starts to fail consistently on nodeA.
The only solutions that I see today are taking nodeA offline for all jobs, or reconfiguring many jobs or nodes, which is pretty time-consuming. We are using Job DSL to configure the jobs, so changing the job configuration requires a check-in.
An ideal situation for us would be to have a configuration on the node:
Exclude job with name: jobA
Is there some easy way to configure that jobA should temporarily run only on nodeB and nodeC, while jobB/jobC should still run on all nodes with the label general?
Create a parameterized job to run some job-dsl configuration. Make one of the parameters a "Choice" listing the job names that you might want to change.
Another parameter would select a label defining the node(s) you want to run the job on. (You can have more than one label on a node).
The job-dsl script then updates the job label.
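A minimal sketch of such a seed script (TARGET_JOB and NODE_LABEL are assumed to be the two job parameters described above; note that Job DSL regenerates the whole job config, so in practice this label() call would sit inside the job's full definition):

// Job DSL executed by the parameterized seed job
job(TARGET_JOB) {
    // re-pin the job to whatever label was selected, e.g. 'general' or a single node name
    label(NODE_LABEL)
}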
This groovy script will enable/disable all jobs in a folder:
import jenkins.model.Jenkins

// Resolve a folder item by its full path, e.g. 'team/my-folder'
def findFolder(def jenkins, String path) {
    jenkins.getItemByFullName(path)
}

// "State" job parameter (choice, DISABLED|ENABLED)
def targetState = ('DISABLED'.equalsIgnoreCase(State))
// "Folder" job parameter (choice or free-text)
def targetFolderPath = Folder.trim()

def folder = findFolder(Jenkins.instance, targetFolderPath)
println "Setting all jobs in '${folder.name}' to disabled=${targetState}"
for (job in folder.getAllJobs()) {
    job.disabled = targetState
    println "updated job: ${job.name}"
}
I just came across the same issue: I want the job to run on devices with one label, say "labelA", but do not want it to run on devices with label "labelB".
We may try this:
node('labelA && !labelB') {
    // runs only on nodes that have labelA and do not also carry labelB
}
Refer to: https://www.jenkins.io/doc/pipeline/steps/workflow-durable-task-step/#node-allocate-node
You can also use the NodeLabel Parameter Plugin in jobA. Using this plugin you can define the nodes on which the job is allowed to execute. Just add a node parameter and select all nodes but nodeA.
https://wiki.jenkins-ci.org/display/JENKINS/NodeLabel+Parameter+Plugin
For a simple, quick exclude (what I think the original question refers to as "The only solutions that I see today are ... reconfigure ... jobs or nodes"), see this other answer: https://stackoverflow.com/a/29611255/598656
To stop using a node with a given label, one strategy is to simply change the label. E.g. suppose the label is BUILDER; changing the label to -BUILDER will preserve the information for the administrator, but any job using BUILDER as its label will no longer select that node.
To allow a job to run on the node, you can change its node selection to BUILDER||-BUILDER.
A useful paradigm when shuffling labels around.
NOTE that jobs may still select using the prior label for a period of time.
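For illustration, the renamed label could then be consumed from a Pipeline like this (the expression is taken verbatim from above):

node('BUILDER || -BUILDER') {   // matches nodes labelled either BUILDER or -BUILDER
    echo "running on ${env.NODE_NAME}"
}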