Filter out the stage, or filter inside the stage? - Jenkins

I haven't had much luck with Google due to the phrasing of the question, so I apologize for the very basic question.
If I have 3 build stages set up in my Jenkinsfile, and stages 1 and 3 always run but stage 2 only runs if it's a PR, should I keep the stage but skip the command, or filter out the whole stage?
stage('PR-only') {
    if (env.isPR) {
        sh '...'  // PR-only command
    }
}
or more like
if (env.isPR) {
    stage('PR-only') {
        sh '...'
    }
}

If you filter out the whole stage, then after a non-PR build, the stage visualization on the build page will remove that PR stage (for all builds, even the ones which did in fact run the PR stage).
So, from a visualization perspective, I would suggest that you retain the stage.
stage('PR-only') {
    if (env.isPR) {
        sh '...'
    }
}
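On Declarative Pipeline, a when directive expresses the same intent and likewise keeps the skipped stage visible in the visualization. A minimal sketch, assuming PR builds arrive as change requests (the stage name and command are placeholders):

stage('PR-only') {
    when { changeRequest() }  // built-in condition; true only for PR builds
    steps {
        sh '...'  // placeholder for the PR-only command
    }
}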

Related

run multiple steps in Jenkins pipeline

In my project, I have a requirement to run multiple steps.
I followed this guideline: Jenkins Guide
Here is the code:
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'echo "Hello World"'
                sh '''
                    echo "Multiline shell steps works too"
                    ls -lah
                '''
            }
        }
    }
}
Do I have any other alternative for handling multiple steps in a Jenkins pipeline? I am also thinking of using a script block inside steps, but I am not sure that's a good way to do it either.
I am trying to understand the best practice for running multiple steps.
You need to better define what it is you intend to do. A starting example is here.
You need to understand the meaning of stage and step.
Stage: A block that defines a conceptually distinct subset of tasks performed through the entire Pipeline (e.g. "Build", "Test" and "Deploy" stages).
Step: A single task. Fundamentally, a step tells Jenkins what to do at a particular point in time (or "step" in the process).
You need to think of both stage and step as atomic units. E.g. Deploy is an atomic activity, but it may consist of numerous steps, some of which might have multiple instructions/commands: copy artifacts (to different targets), copy data, launch the app, etc.
This Tutorial may be useful, too.
Also, review the best practices.
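As for script inside steps: it is a legitimate escape hatch when you need Groovy logic that the Declarative steps don't cover, but plain sequential shell commands are better left as separate sh steps. A minimal sketch of a script block (the branching logic is a made-up placeholder):

pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                // script drops into Scripted Pipeline for imperative logic
                script {
                    if (env.BRANCH_NAME == 'main') {
                        sh 'echo "building main"'
                    } else {
                        sh 'echo "building a feature branch"'
                    }
                }
            }
        }
    }
}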

Jenkins pipeline: how to trigger another job and wait for it without using an extra agent/executor

I am trying to setup various Jenkins pipelines whose last stage is always to run some acceptance tests. To cut a long story short, acceptance tests and test data (much of which is shared) for all products are checked into the same repository which is about 0.5 GB in size. It therefore seemed best to have a separate job for the acceptance tests and trigger it with a "build" step from each pipeline with the appropriate arguments to run the relevant tests. (It is also sometimes useful to rerun these tests without rebuilding the product)
stage('AcceptanceTest') {
    steps {
        build job: 'run-tests', parameters: ..., wait: true
    }
}
So far I have seen that I can either:
- trigger the job as normal. But this uses an extra agent/executor; there doesn't seem to be a way to tell it to reuse the one from the build (main pipeline). Both pipelines start with agent { label 'master' }, but that seems to mean "allocate a new agent on a node matching 'master'".
- trigger the job with the wait: false argument. This doesn't block an executor, but it does mean I can't report the results of the tests in the main pipeline. It gives the impression that the test stage has always succeeded.
Is there a better way?
I seem to have solved this by adding agent none at the top of my main pipeline and moving agent { label 'master' } into the build stage. I can then leave my 'AcceptanceTest' stage without an agent and define it in the 'run-tests' job as before. I was under the impression from the docs that if you put agents in stages then all stages needed to have one, but it seems that is not the case. Which is lucky for this use case...
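A minimal sketch of that arrangement (the build commands are placeholders, and the parameters are elided as in the question):

pipeline {
    agent none  // no executor is held for the pipeline as a whole
    stages {
        stage('Build') {
            agent { label 'master' }  // executor allocated only for this stage
            steps {
                sh '...'  // placeholder build commands
            }
        }
        stage('AcceptanceTest') {
            // no agent: this runs on a flyweight executor, so only the
            // downstream 'run-tests' job occupies a heavyweight executor
            steps {
                build job: 'run-tests', wait: true  // parameters elided
            }
        }
    }
}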
I don't think that there's another way for Declarative Pipeline.
On the other hand, for Scripted Pipeline you could execute this outside of node {}, and it would just hold onto one executor on the master, releasing the one on the slave.
stage("some") {
build job: 'test'
node {
...
Related question: Jenkins - Trigger another pipeline job in same machine - without creating new "Executor"

Jenkins 2 Declarative pipelines - Is it possible to run all stages within a node (agent any) but having some of them running without it?

I have a CD pipeline that requires user confirmation at some stages, so I would like to free up server resources while the pipeline is waiting for the user input.
pipeline {
    agent any
    stages {
        stage ('Build Stage') {
            steps {
                ...
            }
        }
        stage ('User validation stage') {
            agent none
            steps {
                input message: 'Are you sure you want to deploy?'
            }
        }
        stage ('Deploy Stage') {
            steps {
                ...
            }
        }
    }
}
You can see above that I have a global agent any but in the User Validation Stage I added agent none.
Can someone confirm that this is doing what I want (no agent/node is waiting for the user input)? I don't see how to verify it; nothing looks different in the execution log...
If not, how could I do it?
This will not work as you expect. You cannot specify agent any on the entire pipeline and then expect agent none to not occupy the executor.
To prove this, you can run this code as you have it, and while it is waiting at the input stage, go to your main Jenkins page and look at the Build Executor Status. You will see there is an executor still running your job.
Next, switch your pipeline to agent none and add agent any to all the other stages (besides your input stage), then do the same test. You will see that while waiting at the input stage, none of the executors are occupied.
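A sketch of that rearrangement, with the stage bodies elided as in the question (once the pipeline-level agent is none, the explicit agent none on the input stage is no longer needed):

pipeline {
    agent none
    stages {
        stage ('Build Stage') {
            agent any
            steps {
                ...
            }
        }
        stage ('User validation stage') {
            // no agent: the input waits on a flyweight executor only
            steps {
                input message: 'Are you sure you want to deploy?'
            }
        }
        stage ('Deploy Stage') {
            agent any
            steps {
                ...
            }
        }
    }
}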
As to your question about different workspaces on different nodes... Assuming you are using code from an SCM, it will be checked out on every new node, so that isn't a concern. The only thing you need to worry about is artifacts you created in each stage.
It is not safe to "hope" that you will stay on the same node, though Jenkins will "try" to keep you there. But even then, there is no guarantee that you will get the same workspace directory.
The correct way to handle this is to stash all the files that you may have created or modified that you will need in later stages. Then in the following stages, unstash the required files. Never assume files will make it between stages that have their own node declaration.
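For example, a minimal sketch of stashing between stages (the stash name, file pattern, and commands are placeholders):

stage ('Build Stage') {
    agent any
    steps {
        sh '...'  // produces artifacts, say under bin/
        stash name: 'build-output', includes: 'bin/**'
    }
}
stage ('Deploy Stage') {
    agent any
    steps {
        unstash 'build-output'  // restores the files into this stage's workspace
        sh '...'
    }
}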

Jenkins declarative pipeline: What workspace is associated with a stage when the agent is set only for the pipeline?

Here is an example of declarative pipeline where the agent is set for the pipeline but not set in the individual stages:
pipeline {
    agent { node { label 'linux' } }
    stages {
        stage('Checkout') {
            steps {
                checkout scm
            }
        }
        stage('Build') {
            steps {
                sh 'make'
            }
        }
    }
}
Documentation I've found about Scripted Pipeline makes it clear that a single workspace will be used within a single node block, but multiple node blocks might be allocated multiple workspaces; it is therefore necessary to stash between those steps, use the External Workspace Plugin, etc., if you want to be sure of what's in the workspace between steps.
I had a hard time finding documentation about workspace guarantees for declarative pipeline. What guarantees about workspaces exist for this example?
I believe I encountered two stages executing in different workspaces during testing of a similar pipeline but I'm not sure that's what was happening. I'd really like to avoid needing to stash my checkout prior to my build step or use the External Workspace plugin so I was hoping there'd be a way to force all my stages to run all in one workspace/on one node.
The Pipeline code presented should only create a single workspace and run all stages in it. Unless you create a new agent directive in any of your stages, it will not utilize another node or workspace.
btw, checkout scm happens automatically at the beginning of the Pipeline with Declarative, so you don't need to call it out explicitly.
i'm 70% sure--based on anecdotal evidence--that you will always get the same workspace on the same node in different stages of a declarative pipeline if you specify a node at the top level and never override it, the way you're doing.
i reserve the right to adjust my confidence level as i receive feedback on this answer. :D

What is the difference between a node, stage, and step in Jenkins pipelines?

I'm trying to understand how to structure my Jenkins 2.7 pipeline groovy script. I've read through the pipeline tutorial, but feel that it could expand more on these topics.
I can understand that a pipeline can have many stages and each stage can have many steps. But what is the difference between a step(); and a method call inside a stage, say sh([script: "echo hello"]);. Should nodes be inside or outside of stages? Should the overall properties of a job be inside or outside a node?
Here is my current structure on an ubuntu master node:
#!/usr/bin/env groovy
node('master') {
    properties([
        [$class: 'BuildDiscarderProperty', strategy: [$class: 'LogRotator', numToKeepStr: '10']]
    ]);

    stage 'Checkout'
    checkout scm

    stage 'Build'
    sh([script: "make build"]);
    archive("bin/*");
}
The concepts of node, stage and step are different:
node specifies where something shall happen. You give a name or a label, and Jenkins runs the block there.
stage structures your script into a high-level sequence. Stages show up as columns in the Pipeline Stage view with average stage times and colours for the stage result.
step is one way to specify what shall happen; sh is of a similar quality, just a different kind of action. (You can also use build for things that are already specified as projects.)
So steps can reside within nodes (if they don't, they are executed on the master), and nodes and steps can be structured into an overall sequence with stages.
It depends. Any node declaration allocates an executor (on the Jenkins master or a slave). This requires that you stash and unstash the workspace, as another executor does not have the checked-out sources available.
Several steps of the Pipeline DSL run in a flyweight executor and thus do not require to be inside a node block.
This might be helpful for an example such as the following, where you need to allocate multiple nodes anyway:
stage("Checkout") {
checkout scm
}
stage("Build") {
node('linux') {
sh "make"
}
node('windows') {
bat "whatever"
}
}
stage("Upload") {
...
Another (maybe more realistic) example would be to allocate multiple nodes in parallel. Then there's no need to let the stage call be executed in another allocated executor (i.e. within node).
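A sketch of such a parallel stage in Scripted Pipeline (the labels and commands are placeholders):

stage("Test") {
    parallel(
        linux: {
            node('linux') {
                sh "make test"
            }
        },
        windows: {
            node('windows') {
                bat "make test"
            }
        }
    )
}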
Your example looks good to me. There's no need to allocate multiple nodes within the single stages, as this would only be additional overhead.
