Jenkins Pipeline: Are agents required to use a Jenkinsfile?

I am investigating the use of Jenkins Pipeline (specifically via a Jenkinsfile). For context, I'm deploying a Jenkins instance using Chef. Part of this deployment may include some seed jobs that pull job configurations (Jenkinsfiles) from source control, to automate the creation of our build jobs via Chef.
I've read the Jenkins documentation for both Pipeline and Jenkinsfile, and it seems that in order to use Jenkins Pipeline, agents must be configured and set up in addition to the Jenkins master.
Am I understanding this correctly? Must Jenkins agents exist in order to use a Jenkinsfile with Jenkins Pipeline? This passage in the Jenkinsfile documentation leads me to believe so:
Jenkinsfile (Declarative Pipeline)
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                echo 'Building..'
            }
        }
        stage('Test') {
            steps {
                echo 'Testing..'
            }
        }
        stage('Deploy') {
            steps {
                echo 'Deploying....'
            }
        }
    }
}
The Declarative Pipeline example above contains the minimum necessary
structure to implement a continuous delivery pipeline. The agent
directive, which is required, instructs Jenkins to allocate an
executor and workspace for the Pipeline.
Thanks in advance for any Jenkins guidance!

The agent directive in the pipeline is required; however, this does not mean that you need an external agent in addition to your master. If the master is all you have, the pipeline will execute on the master. If additional agents are available, the pipeline will execute on whichever agent happens to be available when you run it.
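For instance, if you want to be explicit about running on the built-in node rather than relying on agent any, here is a minimal sketch (note that newer Jenkins versions label the built-in node 'built-in' rather than 'master'):

pipeline {
    // Pin the pipeline to the controller's built-in node.
    // On newer Jenkins versions, replace 'master' with 'built-in'.
    agent { label 'master' }
    stages {
        stage('Build') {
            steps {
                echo 'Building on the built-in node'
            }
        }
    }
}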

If you go into Manage Jenkins -> Manage Nodes and Clouds, you can see that the master itself is treated as one of the default nodes. In the declarative format, agent any indicates any available agent, which includes the master (depending on the node configuration; see below).
If you configure a new node, it can then be used as an agent in the pipeline: agent any can be replaced with agent { label 'Node_Name' }.
You can refer to this LINK, which briefly explains the difference between an agent, a node, and a slave.
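As a concrete sketch, with 'Node_Name' standing in for whatever label you assigned to your new node:

pipeline {
    // Run the whole pipeline on the node carrying this label.
    agent { label 'Node_Name' }
    stages {
        stage('Build') {
            steps {
                // env.NODE_NAME reports which node actually ran the stage.
                echo "Running on ${env.NODE_NAME}"
            }
        }
    }
}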

Related

Jenkins - Execute Pipeline from Specific folder

I have a main pipeline in Jenkins that checks out the code, compiles, tests, and builds, then pushes an image to Docker. That is the high-level CI pipeline I have; say the job name is "MainJobA".
I need to create a new job just for JavaDoc generation. For that I have created a new script in the Git repo and configured it in a Pipeline job.
Now I need to execute this sub job, which generates the JavaDoc and publishes the HTML reports, from the workspace of "MainJobA". That is, I need to run SubJobA's pipeline stages from
/home/jenkins/workspace/MainJobA
How can I achieve this?
There is a build step available in Jenkins declarative pipelines.
Use it like this:
pipeline {
    agent any
    stages {
        stage('build') {
            steps {
                build 'Pipeline_B'
            }
        }
    }
}
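If the downstream job needs to know where the upstream workspace lives, one option is to pass the path explicitly. A sketch, assuming SubJobA defines a string parameter named UPSTREAM_WORKSPACE (a hypothetical name):

pipeline {
    agent any
    stages {
        stage('javadoc') {
            steps {
                // Hand this job's workspace path to the downstream job;
                // SubJobA can then read it as params.UPSTREAM_WORKSPACE.
                build job: 'SubJobA',
                      parameters: [string(name: 'UPSTREAM_WORKSPACE', value: env.WORKSPACE)]
            }
        }
    }
}

Note that a filesystem path like this is only meaningful if both jobs end up running on the same node.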

Triggering vSphere build via Jenkins pipeline agent

My goal is to set up a declarative pipeline job that automatically triggers the vSphere plugin to create a VM on which the build and tests run in a clean environment.
I've configured the vSphere Cloud Plugin in Jenkins' global settings to build slaves with the label "appliance-slave", and this does trigger for freestyle jobs with "Restrict where this project can be run" set to that label. However, the following example pipeline never triggers the vSphere plugin (based on tailing the Jenkins log):
pipeline {
    agent {
        label 'appliance-slave'
    }
    stages {
        stage('Test') {
            steps {
                sh "hostname && hostname -i"
            }
        }
    }
}
I've searched the documentation without any luck. Is there some configuration option or alternate agent declaration that I'm missing that would allow this?
I finally resolved the problem: the issue was that I needed to go into the actual slave configuration and set up the slave there. The vSphere plugin modifies the slave configuration page to allow exactly what I was trying to do: shutting down and reverting the VM once the build is complete.

Do I have to use a node block in Declarative Jenkins pipelines?

I was reading about the best practices of a Jenkins pipeline.
I have created a declarative pipeline that does not execute parallel jobs, and I want to run everything on the same slave.
I use:
agent {
    label 'xxx'
}
The rest of my pipeline looks like:
pipeline {
    agent {
        label 'xxx'
    }
    triggers {
        pollSCM pipelineParams.polling
    }
    options {
        buildDiscarder(logRotator(numToKeepStr: '3'))
    }
    stages {
        stage('stage1') {
            steps {
                xxx
            }
        }
        stage('stage2') {
            steps {
                xxx
            }
        }
    }
    post {
        always {
            cleanWs()
        }
        failure {
            xxx
        }
        success {
            xxx
        }
    }
}
Now I read the best practices here.
Point 4 says:
Do: All Material Work Within a Node
Any material work within a pipeline should occur within a node block.
Why? By default, the Jenkinsfile script itself runs on the Jenkins
master, using a lightweight executor expected to use very few
resources. Any material work, like cloning code from a Git server or
compiling a Java application, should leverage Jenkins distributed
builds capability and run an agent node.
I suspect this is for scripted pipelines.
Now my questions are:
Do I ever have to create a node block inside a stage in a declarative pipeline (it is possible), or should I use agent inside the stage when I want to run that stage on another specific agent?
My current pipeline uses a label that is assigned to 4 agents, yet the whole pipeline always executes on a single agent (which is what I want). I would have expected stage1 to execute on slaveX and maybe stage2 on slaveY. Why is this not happening?
The documentation is quite misleading.
What the documentation is suggesting is to take advantage of distributed builds. Distributed builds are activated by using either the agent or the node block.
The agent directive should be used when you want to run the pipeline almost exclusively on one node. The node block allows for more flexibility, as it lets you specify where a granular task should be executed.
If you are running the pipeline on some agent and you wrap a step in a node block pointing at the same agent, there won't be any effect except that a new executor will be allocated to the step wrapped in node.
There is no obvious benefit in doing so. You will simply be consuming executors that you don't need.
In conclusion, you are already using distributed builds when using agent, and this is what the documentation is vaguely recommending.
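To make the first question concrete: in declarative syntax you can override the agent per stage instead of opening a node block. A sketch with hypothetical labels:

pipeline {
    // Default agent for every stage that does not override it.
    agent { label 'xxx' }
    stages {
        stage('stage1') {
            steps {
                echo 'Runs on the default agent'
            }
        }
        stage('stage2') {
            // Override: this stage alone runs on a differently labelled agent.
            agent { label 'other-label' }
            steps {
                echo 'Runs on the other agent'
            }
        }
    }
}

Be aware that the top-level agent generally stays allocated for the whole run, so a stage-level override consumes an additional executor, much like the node-inside-agent case above.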

Jenkins declarative pipeline: What workspace is associated with a stage when the agent is set only for the pipeline?

Here is an example of a declarative pipeline where the agent is set for the pipeline but not in the individual stages:
pipeline {
    agent { node { label 'linux' } }
    stages {
        stage('Checkout') {
            steps {
                checkout scm
            }
        }
        stage('Build') {
            steps {
                sh 'make'
            }
        }
    }
}
Documentation I've found about scripted pipeline makes it clear that a single workspace is used within a single node block, but multiple node blocks might be allocated multiple workspaces; therefore it is necessary to stash between those steps, use the External Workspace plugin, etc., if you want to be sure of what's in the workspace between steps.
I had a hard time finding documentation about workspace guarantees for declarative pipeline. What guarantees about workspaces exist for this example?
I believe I encountered two stages executing in different workspaces during testing of a similar pipeline, but I'm not sure that's what was happening. I'd really like to avoid needing to stash my checkout prior to my build step or use the External Workspace plugin, so I was hoping there'd be a way to force all my stages to run in one workspace/on one node.
The Pipeline code presented should only create a single workspace and run all stages in it. Unless you add a new agent directive in one of your stages, it will not utilize another node or workspace.
By the way, checkout scm happens automatically at the beginning of a Declarative Pipeline, so you don't need to call it explicitly.
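If you want to verify this yourself, a quick sketch that echoes the node and workspace in each stage (Jenkins exposes both through env):

pipeline {
    agent { node { label 'linux' } }
    stages {
        stage('Checkout') {
            steps {
                // Both stages should print the same node and workspace path.
                echo "Checkout on ${env.NODE_NAME} in ${env.WORKSPACE}"
            }
        }
        stage('Build') {
            steps {
                echo "Build on ${env.NODE_NAME} in ${env.WORKSPACE}"
            }
        }
    }
}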
I'm 70% sure, based on anecdotal evidence, that you will always get the same workspace on the same node in different stages of a declarative pipeline if you specify a node at the top level and never override it, the way you're doing.
I reserve the right to adjust my confidence level as I receive feedback on this answer. :D

Visualizing build steps in Jenkins pipeline

In my Jenkins pipeline, I trigger several other jobs using the build step and pass some parameters to it. I'm having issues visualizing the different jobs I've triggered in addition to my pipeline. I have set up the Jenkins Delivery Pipeline plugin but the documentation for it is extremely vague and I have only been able to visualize the steps within my pipeline, despite tagging the jobs with both a stage and task name.
Example:
I have two jobs in Jenkins as pipelines/workflow jobs with the following pipeline script:
Job Foo:
stage('Building') {
    println 'Triggering job'
    build 'Bar'
}
Job Bar:
node('master') {
    stage('Child job stage') {
        println 'Doing stuff in child job'
    }
}
When visualizing this with the Delivery Pipeline plugin, I only see the stage from job Foo, not the stage from job Bar.
How do I make it also show the stage in job Bar in a separate box?
Unfortunately, this use case is currently not supported in the Delivery Pipeline plugin as of version 1.0.0. The Delivery Pipeline plugin views for Jenkins pipelines only render what is contained within one pipeline definition at this point. This feature request is tracked in JENKINS-43679.
