Manual Build Step in Jenkins Declarative Pipeline?

This is a follow-up to a previous post that never seems to have been truly answered with more than a "this looks promising":
Jenkins how to create pipeline manual step.
This is a major functionality gap for CI/CD pipelines. The current input step in Declarative (1.2.9) forces the whole pipeline to wait for the input before it can complete (or to use a time-out, which won't let you trigger the remaining stages later). Depending on how agents are scoped, it can also hold up an executor or force you to start a new slave for every build stage.
This is the closest I've come to a solution that doesn't eat up an executor (pipeline-level "agent none" with agents defined in all stages, described here: https://jenkins.io/blog/2018/04/09/whats-in-declarative/), but starting a new slave for every build stage seems wasteful and requires additional considerations for persisting your workspace. The final solution offered was to put a time-out on the input, but that still doesn't work, because then you can never promote that build to a later stage and will need to rebuild.
Any solutions or suggestions here would be much appreciated.

If you are using the Kubernetes plugin to run Jenkins agents as containers in a Kubernetes cluster, there is a setting called idleMinutes:
idleMinutes: Allows the pod to remain active for reuse until the configured number of minutes has passed since the last step was executed on it. Use this only when defining a pod template in the user interface.
With that, you can define your agent at the pipeline level without defining it in every stage (given your agent has the capabilities to run all stages). When it comes to the user-input stage, set the agent to none at the stage level so that it is not holding up an executor.
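For illustration, a minimal sketch of one way to put this together: it keeps the per-stage agents from the question, but points them at an idleMinutes pod template so the same pod is reused instead of a new agent per stage, and the input stage gets no agent. The label 'k8s-agent', the stage names, and the shell commands are assumptions.

pipeline {
    // Top-level agent none: no executor is held for the whole run.
    agent none
    stages {
        stage('Build') {
            // Kubernetes pod template with idleMinutes configured in the UI,
            // so the pod stays around and can be reused by later stages.
            agent { label 'k8s-agent' }
            steps {
                sh 'make build'   // placeholder
            }
        }
        stage('Approval') {
            // No agent here, so nothing is blocked while waiting for input.
            steps {
                input message: 'Promote this build?'
            }
        }
        stage('Deploy') {
            // Reuses the idle pod if it is still within idleMinutes.
            agent { label 'k8s-agent' }
            steps {
                sh 'make deploy'  // placeholder
            }
        }
    }
}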


Why declarative pipelines need to run on master if there are build executors available?

I'm using a recent Jenkins version (2.286), and since this update there is a security hint: "You should set up distributed builds. Building on the controller node can be a security issue. See the documentation."
But I'm already doing so with three Jenkins nodes and I also fully understand the security implications.
The problem here is that there are two jobs that need to run on the master, since they are the jobs that deploy those Jenkins nodes. That means I cannot reduce the number of build executors to 0.
I've also tried using the Job Restrictions plugin to restrict which jobs can run on the master. The problem there is that all my jobs wait for the master queue to have a free slot available. I wonder why, because they are all declarative pipelines and define something like:
agent {
    label 'some-different-node-label'
}
Which means they aren't really executed on the master node.
Questions here are:
Is it intentional that all jobs require the master node before switching to the agent?
Is there any configuration option to change that?
Is there a way to execute the deploy jobs on the master, even if there aren't any executors defined there (to bypass that behavior)?
Thanks.
With declarative pipelines, the lightweight checkout is done on the Master node to get the Jenkinsfile for that job. While this doesn't use an executor on the Master, perhaps the Job Restrictions plugin is still blocking it (I haven't used it before, so I cannot comment).
Certain pipeline actions are also delegated back to the Master node (e.g. the withAWSParameterStore step).
If you look at the console output for a declarative pipeline job, you will see lots of output (mainly around library or git checkouts) before the start of the pipeline ([Pipeline] Start of Pipeline). All of that is done on the Master.
Unfortunately this cannot be changed, as the Master needs to do this work to find out which agent type to delegate the job to.
Depending on how you are running your agents, you could use something like the EC2 Cloud Plugin to generate your agent nodes, which wouldn't require a job to do it.
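If you do keep one executor on the controller just for the two node-deployment jobs, a rough sketch of pinning them there by label (on 2.286 the controller's label is normally 'master'; newer versions use 'built-in'; the stage name and script are placeholders):

pipeline {
    // Pin only the node-deployment job to the controller; every other job
    // keeps its own agent label as in the snippet above.
    agent { label 'master' }
    stages {
        stage('Deploy Jenkins node') {
            steps {
                sh './deploy-node.sh'   // placeholder for the actual deployment
            }
        }
    }
}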

How to run jenkins build alternatively on agent nodes?

Let's say I have a job A and also an agent configured. I want to run build 1 of Job A on the master and build 2 of Job A on the agent node.
Is there an option to achieve that?
OR
Is there a way for my job to look at the controller and, if it already finds a build running there, start the next build on the agent?
Are you intending to run in parallel or just alternate? (It's not a good idea to run jobs on the master; you could instead configure a node to run on the same host as the "master".) It sounds like you want parallel runs and have restricted yourself to one executor each on the master and the agent (you can have more, in which case any of this advice may be moot).
Nevertheless, Jenkins' allocation of queued jobs to executors is "sticky": it tries to run a job where it last ran, unless that node is unavailable. This can overload individual nodes, so the M, A, M, A pattern is unnatural.
There are plugins that might help (Least Load, Scoring Load Balancer), but maybe not.
One approach would be to restrict your job to a label and add a post-build Groovy step that, on success, moves the label to the other node for the next run; or use two labels and have the job modify its own label to switch to the other.
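As a rough sketch of that label-flipping idea (assuming a freestyle job named 'JobA', a Groovy post-build step with system privileges, and nodes labelled 'master' and 'agent1'; all of these names are hypothetical):

import jenkins.model.Jenkins
import hudson.model.AbstractProject

// Flip the job's assigned label between the controller and the agent,
// so the next build is queued on the other node.
def job = Jenkins.get().getItemByFullName('JobA', AbstractProject)
def next = (job.getAssignedLabelString() == 'master') ? 'agent1' : 'master'
job.setAssignedLabel(Jenkins.get().getLabel(next))
job.save()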

Run same pipeline in parallel in Jenkins

I am relatively new to Jenkins.
I created a declarative pipeline in Jenkins where users are asked to enter their branch name and then Jenkins builds that specific branch (for example, origin/mybranch).
This allows me to run a quick set of tests for specific branches.
The developers can run the pipeline multiple times, and today I block multiple such pipelines from running simultaneously, because if they do, one overwrites the other.
This happens because the first pipeline writes to c:\Jenkins\workspace\QuickBuild, and when another such job runs it writes to that exact same folder, killing the original run.
Blocking was the solution I found to prevent this, but I would like it so that when one run is finishing up (using fewer than 8 cores) the next run in the queue can already start with whatever cores are freed up.
I would have thought this would be a basic concept in Jenkins.
Am I missing something? Am I doing it wrong?
Following MaratC and Zett42's suggestions, I ended up adding this to my script:
agent {
    node {
        customWorkspace "${params.Branch}"
    }
}
This causes Jenkins to create each build in a different folder and they don't step on each others' toes.
The only downside is that you can't build the same branch simultaneously, but that's a corner case.
Also, I could add a random number to the workspace name to enable that as well.
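If building the same branch simultaneously does become a need, one variation (an untested sketch) is to fold the build number into the workspace path so every run gets its own folder; the label value is an assumption:

agent {
    node {
        label 'windows'   // the label is an assumption; adjust to your environment
        customWorkspace "QuickBuild/${params.Branch}-${env.BUILD_NUMBER}"
    }
}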

Jenkins: Run only one build for a job on each node

We have a project with several Jenkins jobs:
one type that runs delivery (A),
one that does just compilation and unit tests (B),
and one that runs integration tests, static code analysis, et cetera (C).
We run on four Jenkins nodes (master + three slaves), and our jobs are a mix of declarative pipeline jobs and jobs configured manually in the Jenkins UI.
We only want to run one integration test build per node at a time. However, we want to run as many deliveries (A) and code quality (B) builds as there are executors.
Until now, the Throttle Concurrent Builds plugin (https://github.com/jenkinsci/throttle-concurrent-builds-plugin) has satisfied our needs. However, this plugin does not support declarative pipeline builds, nor does it seem to be actively maintained.
The Lockable Resources plugin (https://github.com/jenkinsci/lockable-resources-plugin) seems promising, but we haven't found any way to lock the entire build with a dynamically set resource name. That is, when we start a C build, we want it to lock "resource_{name of server}".
The plugin allows setting a whole-build lock in the options directive, but we haven't figured out how to evaluate an environment variable there.
Any suggestions would be highly appreciated!
The workaround on our side was to rewrite the pipeline script from declarative to scripted syntax. With that, the Throttle Concurrent Builds plugin works like a charm.
When the bug https://issues.jenkins-ci.org/browse/JENKINS-49173 is fixed, the plugin will work with declarative pipelines as well.
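For reference, a minimal sketch of that scripted-syntax workaround; the category name 'integration-tests', the node label, and the command are assumptions, and the category has to exist in the plugin's global configuration (with its per-node limit set to one):

// Scripted pipeline: the throttle() step from the Throttle Concurrent Builds
// plugin wraps the node allocation, so the category's limits apply to it.
throttle(['integration-tests']) {
    node('integration') {
        stage('Integration tests') {
            sh './run-integration-tests.sh'   // placeholder
        }
    }
}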

Why build executor status showing two jobs for one pipeline job?

I am using a Groovy pipeline script for a build job, so my Jenkins pipeline looks like this:
node {
    git url: 'myurl.git'
    load 'mydir/myfile.groovy'
}
It's working as expected, but in the build executor status it shows as two jobs running.
Why is it showing one job as two jobs with the same name?
Is there something I have missed telling Jenkins for a pipeline job?
I can't find a better documentation source than this README (issue JENKINS-35710 also has some information), but the short of it is that the Groovy pipeline executes on the master (on a flyweight executor) while node blocks run on an allocated executor.
Here is a relevant snippet taken from the linked documentation:
[...]
Why are there two executors consumed by one Pipeline build?
Every Pipeline build itself runs on the master, using a flyweight executor — an uncounted slot that is assumed to not take any significant computational power.
This executor represents the actual Groovy script, which is almost always idle, waiting for a step to complete.
Flyweight executors are always available.
