Jenkins declarative pipeline, run the same job on multiple agents

I have three slave nodes and they all have the label "general-slave".
Right now this pipeline selects only one slave at random and runs the job on it:
pipeline {
    agent { label 'general-slave' }
    stages {
        ...
    }
}
How do I run the job on all three slaves? The stages are long, so I am trying to avoid repeating them.
This might sound straightforward, but I can't seem to find a good answer. Thanks!
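One common way to do this, sketched below as a scripted pipeline: enumerate the nodes that carry the label and fan the same work out to each of them in parallel. This assumes the Pipeline Utility Steps plugin, which provides the nodesByLabel step; the echo stands in for your long shared stages, which you would move into a method or shared library to avoid repetition.
// Fan the same work out to every node carrying the label.
def names = nodesByLabel label: 'general-slave'   // from Pipeline Utility Steps
def branches = [:]
for (n in names) {
    def nodeName = n                // capture the loop variable for the closure
    branches[nodeName] = {
        node(nodeName) {
            // the long shared stages go here (or in a reusable method)
            echo "Running on ${nodeName}"
        }
    }
}
parallel branches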

Related

Building Jenkins job through slave machine which has more disk space

I have a main Jenkins pipeline job which calls multiple other sub-jobs during build time.
I also have 2 Jenkins slave machines, where slave1 has 100 GB of space left and slave2 has 30 GB left.
But during the build Jenkins uses slave2 instead of slave1, which has more space.
How do I configure Jenkins so that it uses the slave machine with more space?
In the Jenkins pipeline you can specify where you want your job to run, like below:
Scripted Pipeline
node('labelName') {
    stage('...') {
        ...
    }
}
Declarative Pipeline
pipeline {
    agent {
        label 'agentLabelName'
    }
    stages {
        stage('...') {
            steps {
                .....
            }
        }
    }
}
More information can be found in the Jenkins Pipeline documentation.
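Applied to the question above, a hedged sketch: Jenkins does not pick agents by free disk space on its own, so give slave1 a dedicated label in its node configuration (big-disk is a made-up name here) and pin the job to it:
pipeline {
    // 'big-disk' is a hypothetical label assumed to be assigned to slave1
    agent { label 'big-disk' }
    stages {
        stage('build') {
            steps {
                sh './build.sh'   // placeholder for the real build steps
            }
        }
    }
}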

Running one particular job in parallel on Jenkins

I have multiple jobs set up on Jenkins. There is only one executor on the master, and there are many other slaves. I want to create a job which would run on a separate job queue or a separate executor concurrently. How can I achieve this in the simplest way? Can it be achieved without modifying the slaves?
The next question is how I can make any new build of this job run in parallel. The rest of the jobs should not be disturbed or interfered with.
As I understand it, you need to configure a particular job to run on a slave node.
To achieve that, first add a node to the Jenkins master; there you have a place to add a label to that slave node. Then, in the pipeline script of your job, edit it as follows.
pipeline {
    agent {
        node {
            label '<YOUR SLAVE NODE LABEL>'
        }
    }
}
In the latter part of the question, you asked about running jobs in parallel. Please refer to the parallel job documentation. If you need to add sequential stages, refer to the sequential stage documentation. If you want to create a dynamic parallel job, the documentation covers that as well. A minimal sketch of the declarative parallel syntax follows.
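A minimal declarative sketch of parallel stages (the stage names and label are placeholders):
pipeline {
    agent none
    stages {
        stage('parallel work') {
            parallel {
                stage('job A') {
                    agent { label '<YOUR SLAVE NODE LABEL>' }
                    steps {
                        echo 'runs concurrently with job B'
                    }
                }
                stage('job B') {
                    agent { label '<YOUR SLAVE NODE LABEL>' }
                    steps {
                        echo 'runs concurrently with job A'
                    }
                }
            }
        }
    }
}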

Do I have to use a node block in Declarative Jenkins pipelines?

I was reading about the best practices of a Jenkins pipeline.
I have created a declarative pipeline which is not executing parallel jobs and I want to run everything on the same slave.
I use:
agent {
    label 'xxx'
}
The rest of my pipeline looks like:
pipeline {
    agent {
        label 'xxx'
    }
    triggers {
        pollSCM pipelineParams.polling
    }
    options {
        buildDiscarder(logRotator(numToKeepStr: '3'))
    }
    stages {
        stage('stage1') {
            steps {
                xxx
            }
        }
        stage('stage2') {
            steps {
                xxx
            }
        }
    }
    post {
        always {
            cleanWs()
        }
        failure {
            xxx
        }
        success {
            xxx
        }
    }
}
Now I read the best-practices article.
Point 4 says the following:
Do: All Material Work Within a Node
Any material work within a pipeline should occur within a node block. Why? By default, the Jenkinsfile script itself runs on the Jenkins master, using a lightweight executor expected to use very few resources. Any material work, like cloning code from a Git server or compiling a Java application, should leverage Jenkins distributed builds capability and run on an agent node.
I suspect this is for scripted pipelines.
Now my questions are:
Do I ever have to create a node inside a stage in a declarative pipeline (it is possible), or should I use agent inside the stage when I want to run that stage on another specific agent?
My current pipeline uses a label that matches 4 agents, yet the whole pipeline always executes on a single agent (which is what I want). I would have expected it to execute stage1 on slaveX and maybe stage2 on slaveY. Why is this not happening?
The documentation is quite misleading.
What the documentation is suggesting is to take advantage of distributed builds. Distributed builds are activated by using either the agent or the node block.
The agent directive should be used when you want to run the pipeline almost exclusively on one node. The node block allows for more flexibility, as it lets you specify where a granular task should be executed.
If you are running the pipeline on some agent and you encapsulate a step with a node block pointing at the same agent, there won't be any effect except that a new executor will be allocated to the step encapsulated by node.
There is no obvious benefit in doing so; you will simply be consuming executors that you don't need.
In conclusion, you are already using distributed builds when using agent, and this is what the documentation is vaguely recommending.
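To make the distinction concrete, a minimal declarative sketch: keep a top-level agent for the bulk of the work and add a stage-level agent only where a stage genuinely needs a different machine (both labels are placeholders):
pipeline {
    agent { label 'xxx' }                  // default node for the whole pipeline
    stages {
        stage('build') {
            steps {
                echo 'runs on the default agent'
            }
        }
        stage('deploy') {
            agent { label 'deploy-box' }   // hypothetical label; only this stage moves
            steps {
                echo 'runs on the deploy-box agent'
            }
        }
    }
}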

Jenkins declarative pipeline: What workspace is associated with a stage when the agent is set only for the pipeline?

Here is an example of declarative pipeline where the agent is set for the pipeline but not set in the individual stages:
pipeline {
    agent { node { label 'linux' } }
    stages {
        stage('Checkout') {
            steps {
                checkout scm
            }
        }
        stage('Build') {
            steps {
                sh 'make'
            }
        }
    }
}
Documentation I've found about scripted pipeline makes it clear that a single workspace will be used within a single node block, but multiple node blocks might be allocated multiple workspaces; it is therefore necessary to stash between those steps, use the External Workspace Plugin, etc., if you want to be sure of what's in the workspace between steps.
I had a hard time finding documentation about workspace guarantees for declarative pipeline. What guarantees about workspaces exist for this example?
I believe I encountered two stages executing in different workspaces during testing of a similar pipeline but I'm not sure that's what was happening. I'd really like to avoid needing to stash my checkout prior to my build step or use the External Workspace plugin so I was hoping there'd be a way to force all my stages to run all in one workspace/on one node.
The Pipeline code presented should only create a single workspace and run all stages in it. Unless you create a new agent directive in any of your stages it will not utilize another node or workspace.
btw, checkout scm happens automatically at the beginning of the Pipeline with Declarative so you don't need to explicitly call that out.
I'm 70% sure, based on anecdotal evidence, that you will always get the same workspace on the same node in different stages of a declarative pipeline if you specify a node at the top level and never override it, the way you're doing.
I reserve the right to adjust my confidence level as I receive feedback on this answer. :D
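One cheap way to verify this on your own installation, as a sketch: print env.WORKSPACE in each stage and compare the paths.
pipeline {
    agent { node { label 'linux' } }
    stages {
        stage('Checkout') {
            steps {
                checkout scm
                echo "Checkout workspace: ${env.WORKSPACE}"   // note the path
            }
        }
        stage('Build') {
            steps {
                echo "Build workspace: ${env.WORKSPACE}"      // should print the same path
                sh 'make'
            }
        }
    }
}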

Jenkins Pipeline Plugin execute two pipelines in parallel and make downstream pipeline wait

We are using Jenkins and the Pipeline plugin for CI/CD. We have two pipelines that we need to run in parallel, and there is a downstream pipeline which should trigger ONLY if the two upstream pipelines both finish and are successful.
P1 --+
     +--> P3
P2 --+
Basically P3 should run only when both P1 and P2 are finished and successful, not when just one of them is.
Is there a way to achieve this? We are using version 2.5 of the plugin.
Since stages only run if previous stages run successfully, and since you can execute other pipelines via build, and since there is a magical instruction called parallel, I think this might do it:
pipeline {
    agent { label 'docker' }
    stages {
        stage("build_p1_and_p2_in_parallel") {
            steps {
                parallel p1: {
                    build 'p1'
                }, p2: {
                    build 'p2'
                }
            }
        }
        stage("build_p3_if_p1_and_p2_succeeded") {
            steps {
                build 'p3'
            }
        }
    }
}
Use the "Snippet Generator" embedded in your jenkins instance to figure out what the argument to build should be. If it's another pipeline at the same level as the top level Jenkinsfile, you could just reference it by job name. Caveat: I've used parallel, but never build within parallel, but it seems like it should work.
You can also try wrapping the pipeline jobs with the MultiJob plugin, which can implement the logic you require as 2 jobs inside a phase.
