Do I have to use a node block in Declarative Jenkins pipelines?

I was reading about best practices for Jenkins pipelines.
I have created a declarative pipeline that does not execute parallel jobs, and I want to run everything on the same slave.
I use:
agent {
    label 'xxx'
}
The rest of my pipeline looks like:
pipeline {
    agent {
        label 'xxx'
    }
    triggers {
        pollSCM pipelineParams.polling
    }
    options {
        buildDiscarder(logRotator(numToKeepStr: '3'))
    }
    stages {
        stage('stage1') {
            steps {
                xxx
            }
        }
        stage('stage2') {
            steps {
                xxx
            }
        }
    }
    post {
        always {
            cleanWs()
        }
        failure {
            xxx
        }
        success {
            xxx
        }
    }
}
Then I read the best practices here.
Point 4 says:
Do: All Material Work Within a Node
Any material work within a pipeline should occur within a node block.
Why? By default, the Jenkinsfile script itself runs on the Jenkins
master, using a lightweight executor expected to use very few
resources. Any material work, like cloning code from a Git server or
compiling a Java application, should leverage Jenkins' distributed
builds capability and run on an agent node.
I suspect this is for scripted pipelines.
Now my questions are:
Do I ever have to create a node inside a stage in a declarative pipeline (it is possible), or should I use agent inside the stage when I want to run that stage on another specific agent?
My current pipeline defines a label that matches 4 agents, yet the whole pipeline always executes on a single agent (which is what I want). I would have expected stage1 to run on slaveX and stage2 on maybe slaveY. Why is this not happening?

The documentation is quite misleading.
What the documentation is suggesting is to take advantage of distributed builds. Distributed builds are activated by using either the agent or the node block.
The agent directive should be used when you want to run the pipeline almost exclusively on one node. The node block allows more flexibility, since it lets you specify where a granular task should be executed.
If you run the pipeline on some agent and you encapsulate a step with a node block pointing at the same agent, there will be no effect except that a new executor is allocated to the step encapsulated with node.
There is no obvious benefit in doing so; you will simply consume executors that you don't need.
In conclusion, you are already using distributed builds when using agent, and this is what the documentation is vaguely recommending.
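To make the contrast concrete, here is a minimal sketch (reusing the 'xxx' label from above; the echo steps are placeholders). The declarative form keeps every stage on the one allocated node:
pipeline {
    agent { label 'xxx' }
    stages {
        stage('stage1') {
            steps {
                echo 'runs on the allocated node'
            }
        }
        stage('stage2') {
            steps {
                echo 'same node, same executor'
            }
        }
    }
}
whereas in scripted pipeline each node block is a separate allocation, possibly on a different machine with a different workspace:
node('xxx') {
    stage('stage1') {
        echo 'first allocation'
    }
}
node('xxx') {
    stage('stage2') {
        echo 'second allocation; the same workspace is not guaranteed'
    }
}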

Related

How can I load the Docker section of my Jenkins pipeline (Jenkinsfile) from a file?

I have multiple pipelines using Jenkinsfiles that retrieve a Docker image from a private registry. I would like to load the Docker-specific information into the pipelines from a file, so that I don't have to modify all of my Jenkinsfiles when the Docker label or credentials change. I attempted to do this using the example Jenkinsfile below:
def common
pipeline {
    agent none
    options {
        timestamps()
    }
    stages {
        stage('Image fetch') {
            steps {
                script {
                    common = load('/home/jenkins/workspace/common/docker_image')
                    common.fetchImage()
                }
            }
        }
    }
}
With docker_image containing:
def fetchImage() {
    agent {
        docker {
            label 'target_node'
            image 'registry-url/image:latest'
            alwaysPull true
            registryUrl 'https://registry-url'
            registryCredentialsId 'xxxxxxx-xxxxxx-xxxx'
        }
    }
}
I got the following error when I executed the pipeline:
Required context class hudson.FilePath is missing
Perhaps you forgot to surround the code with a step that provides this, such as: node,dockerNode
How can I do this using a declarative pipeline?
There are a few issues with this:
You can allocate a node only at the top level:
pipeline {
    agent ...
}
Or you can use per-stage node allocation like so:
pipeline {
    agent none
    ....
    stages {
        stage("My stage") {
            agent ...
            steps {
                // run my steps on this agent
            }
        }
    }
}
You can check the docs here.
The steps are supposed to be executed on the allocated node (though in some cases they can be executed without allocating a node at all).
Declarative Pipeline and Scripted Pipeline are two different things. Yes, it's possible to mix them, but scripted pipeline is meant to either abstract some logic into a shared library, or to provide you a way to be a "hard core master ninja" and write your own fully custom pipeline using the scripted pipeline and none of the declarative sugar.
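Under those constraints, one way to keep the Docker details in a loaded file is to drop the agent block from the loaded function and use the scripted Docker steps inside a node the pipeline has already allocated. This is only a sketch, assuming the Docker Pipeline plugin is installed and reusing the registry URL and credentials ID from your question; the loaded file would contain:
def fetchImage() {
    // scripted Docker steps work inside an already-allocated node,
    // unlike the declarative agent directive
    docker.withRegistry('https://registry-url', 'xxxxxxx-xxxxxx-xxxx') {
        docker.image('registry-url/image:latest').pull()
    }
}
return this
and the pipeline would give the stage a real agent before calling it:
pipeline {
    agent { label 'target_node' }
    stages {
        stage('Image fetch') {
            steps {
                script {
                    def common = load('/home/jenkins/workspace/common/docker_image')
                    common.fetchImage()
                }
            }
        }
    }
}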
I am not sure how your Docker <-> Jenkins connection is set up, but you would probably be better off installing a plugin and using agent templates to provide the agents you need.
If you have a Docker Swarm, you can install the Docker Swarm Plugin, and then in your pipeline you can just configure pipeline { agent { label 'my-agent-label' } }. This will automatically provision your Jenkins with an agent in a container which uses the image you specified.
If you have exposed /var/run/docker.sock to your Jenkins, then you could use Yet Another Docker Plugin, which has the same concept.
This way you can move the agent configuration into the agent template, and your pipeline will only use a label to get the agent it needs.

Jenkins 2 Declarative pipelines - Is it possible to run all stages within a node (agent any) but have some of them run without it?

I have a CD pipeline that requires user confirmation at some stages, so I would like to free up server resources while the pipeline is waiting for the user input.
pipeline {
    agent any
    stages {
        stage ('Build Stage') {
            steps {
                ...
            }
        }
        stage ('User validation stage') {
            agent none
            steps {
                input message: 'Are you sure you want to deploy?'
            }
        }
        stage ('Deploy Stage') {
            steps {
                ...
            }
        }
    }
}
You can see above that I have a global agent any, but in the User Validation Stage I added agent none.
Can someone confirm that this is doing what I want (no agent/node waiting for the user input)? I don't see how to verify it; nothing looks different in the execution log...
If not, how could I do it?
This will not work as you expect. You cannot specify agent any on the entire pipeline and then expect agent none on a stage to not occupy the executor.
To prove this, run the code as you have it, and while it is waiting at the input stage, go to your main Jenkins page and look at the Build Executor Status. You will see there is an executor still running your job.
Next, switch your pipeline to agent none and add agent any to all the other stages (besides your input stage) and repeat the test. You will see that while waiting at the input stage, none of the executors are occupied.
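A minimal sketch of that restructuring (placeholder echo steps stand in for the elided build and deploy steps):
pipeline {
    agent none
    stages {
        stage ('Build Stage') {
            agent any
            steps {
                echo 'build'
            }
        }
        stage ('User validation stage') {
            // no agent here: the input step waits without holding an executor
            steps {
                input message: 'Are you sure you want to deploy?'
            }
        }
        stage ('Deploy Stage') {
            agent any
            steps {
                echo 'deploy'
            }
        }
    }
}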
As to your question about different workspaces on different nodes... Assuming you are using code from an SCM, it will be checked out on every new node, so that isn't a concern. The only thing you need to worry about is artifacts you created in each stage.
It is not safe to "hope" that you will stay on the same node, though Jenkins will "try" to keep you there. But even then, there is not a guarantee that you will get the same workspace directory.
The correct way to handle this is to stash all the files that you may have created or modified that you will need in later stages. Then in the following stages, unstash the required files. Never assume files will make it between stages that have their own node declaration.
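For example (a sketch; the build/** pattern is a hypothetical artifact location):
// at the end of the stage that produces the files
stash name: 'build-output', includes: 'build/**'

// in a later stage, possibly on a different node or workspace
unstash 'build-output'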

Jenkins Pipeline: Are agents required to utilize Jenkinsfile?

I am investigating the use of Jenkins Pipeline (specifically using Jenkinsfile). The context of my implementation is that I'm deploying a Jenkins instance using Chef. Part of this deployment may include some seed jobs, which will pull job configurations from source control (Jenkinsfile), to automate creation of our build jobs via Chef.
I've investigated the Jenkins documentation for both Pipeline and Jenkinsfile, and it seems to me that in order to use Jenkins Pipeline, agents must be configured and set up in addition to the Jenkins master.
Am I understanding this correctly? Must Jenkins agents exist in order to use Jenkins Pipeline's Jenkinsfile? This specific line in the Jenkinsfile documentation leads me to believe this to be true:
Jenkinsfile (Declarative Pipeline)
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                echo 'Building..'
            }
        }
        stage('Test') {
            steps {
                echo 'Testing..'
            }
        }
        stage('Deploy') {
            steps {
                echo 'Deploying....'
            }
        }
    }
}
The Declarative Pipeline example above contains the minimum necessary
structure to implement a continuous delivery pipeline. The agent
directive, which is required, instructs Jenkins to allocate an
executor and workspace for the Pipeline.
Thanks in advance for any Jenkins guidance!
The 'agent' part of the pipeline is required; however, this does not mean that you must have an external agent in addition to your master. If all you have is the master, the pipeline will execute on the master. If you have additional agents available, the pipeline will execute on whichever agent happens to be available when you run it.
If you go into Manage Jenkins -> Manage Nodes and Clouds, you can see that 'Master' itself is treated as one of the default nodes. In the declarative format, agent any indicates any available agent, which includes 'Master' (per the node configuration).
If you configure a new node, it can be used as an agent in the pipeline by replacing agent any with agent { label 'Node_Name' }.
You may refer to this LINK, which briefly explains the notions of agent, node and slave.
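A minimal sketch of the two forms ('Node_Name' is a placeholder label; the two pipelines are alternatives, not one file):
// runs on any available node, including the built-in master
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                echo 'Building..'
            }
        }
    }
}

// pins the pipeline to a specific node by label
pipeline {
    agent { label 'Node_Name' }
    stages {
        stage('Build') {
            steps {
                echo 'Building..'
            }
        }
    }
}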

Jenkins declarative pipeline: What workspace is associated with a stage when the agent is set only for the pipeline?

Here is an example of declarative pipeline where the agent is set for the pipeline but not set in the individual stages:
pipeline {
    agent { node { label 'linux' } }
    stages {
        stage('Checkout') {
            steps {
                checkout scm
            }
        }
        stage('Build') {
            steps {
                sh 'make'
            }
        }
    }
}
The documentation I've found about scripted pipeline makes it clear that a single workspace will be used within a single node block, but that multiple node blocks might be allocated multiple workspaces. It is therefore necessary to stash between those steps, use the External Workspace Plugin, etc., if you want to be sure of what's in the workspace between steps.
I had a hard time finding documentation about workspace guarantees for declarative pipeline. What guarantees about workspaces exist for this example?
I believe I encountered two stages executing in different workspaces during testing of a similar pipeline but I'm not sure that's what was happening. I'd really like to avoid needing to stash my checkout prior to my build step or use the External Workspace plugin so I was hoping there'd be a way to force all my stages to run all in one workspace/on one node.
The Pipeline code presented should only create a single workspace and run all stages in it. Unless you add a new agent directive in one of your stages, it will not utilize another node or workspace.
btw, checkout scm happens automatically at the beginning of the Pipeline with Declarative, so you don't need to call it explicitly.
i'm 70% sure--based on anecdotal evidence--that you will always get the same workspace on the same node in different stages of a declarative pipeline if you specify a node at the top level and never override it, the way you're doing.
i reserve the right to adjust my confidence level as i receive feedback on this answer. :D
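One way to check empirically (a sketch that prints the standard NODE_NAME and WORKSPACE environment variables in each stage; both stages should print the same values):
pipeline {
    agent { node { label 'linux' } }
    stages {
        stage('Checkout') {
            steps {
                sh 'echo "$NODE_NAME : $WORKSPACE"'
            }
        }
        stage('Build') {
            steps {
                // should match the output of the previous stage
                sh 'echo "$NODE_NAME : $WORKSPACE"'
            }
        }
    }
}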

Jenkins Pipeline Plugin execute two pipelines in parallel and make downstream pipeline wait

We are using Jenkins and the Pipeline plugin for CI/CD. We have two pipelines that we need to run parallel, and there is a downstream pipeline which should trigger ONLY if the two upstream pipelines both finish and are successful.
P1 --
     |-- P3
P2 --
Basically P3 should run only when P1 and P2 are finished and successful and not depend on just one of them.
Is there a way to achieve this? We are using 2.5 version of the plugin.
Since stages only run if previous stages run successfully, and since you can execute other pipelines via build, and since there is a magical instruction called parallel, I think this might do it:
pipeline {
    agent { label 'docker' }
    stages {
        stage("build_p1_and_p2_in_parallel") {
            steps {
                parallel p1: {
                    build 'p1'
                }, p2: {
                    build 'p2'
                }
            }
        }
        stage("build_p3_if_p1_and_p2_succeeded") {
            steps {
                build 'p3'
            }
        }
    }
}
Use the "Snippet Generator" embedded in your jenkins instance to figure out what the argument to build should be. If it's another pipeline at the same level as the top level Jenkinsfile, you could just reference it by job name. Caveat: I've used parallel, but never build within parallel, but it seems like it should work.
You can try wrapping the pipeline jobs with the MultiJob plugin, which can implement the logic you require as two jobs inside a phase.
