Continue using node after parallel runs - jenkins

After running tests in parallel, I need to send out notifications immediately. Currently, the parallel nodes are run, then the node is given up, and the send-notifications step sometimes waits for the next available node.
// List of tasks, one for each marker/label type.
def farmTasks = ['ac', 'dc']
// Create a number of agent tasks that matches the marker/label type.
// Deploys the image to the board, then checks out the code on the agent and
// runs the tests against the board.
timestamps {
    stage('Test') {
        def test_tasks = [:]
        for (int i = 0; i < farmTasks.size(); i++) {
            String farmTask = farmTasks[i]
            test_tasks["${farmTask}"] = {
                node("linux && ${farmTask}") {
                    stage("${farmTask}: checkout on ${NODE_NAME}") {
                        // Checkout without clean
                        doCheckout(false)
                    }
                    stage("${farmTask} tests") {
                        <code>
                    }
                } // end of node
            } // end of test_tasks
        } // end of for
        parallel test_tasks
        node('linux') {
            sendMyNotifications();
        }
    } // end of Test stage
} // end of timestamps

Frankly, this code seems totally fine. I have yet to understand why the notification step needs to wait for a node (do you have more pipelines that use these agents? Are there multiple instances of this pipeline running concurrently?), but the workaround for this issue is simple:
Set up another agent (it can reside on a machine that already hosts an existing agent) and give it a unique label (e.g. notifications) so its sole use is to send notifications.
It's not perfect because you get a single point of failure, but it helps remedy the situation while you figure out what causes the "real" agents to be unavailable after the parallel steps.
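For concreteness, a minimal sketch of that workaround against the pipeline above; notifications is an assumed label for the new agent, and sendMyNotifications() is the helper from the question. Only the tail of the Test stage changes:
        parallel test_tasks
        // 'notifications' is a dedicated agent label (assumption) that no
        // test task ever requests, so this node is always free and the
        // notification step never queues behind the test agents.
        node('notifications') {
            sendMyNotifications()
        }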

Related

Multiple job for same cronjob with different params in Jenkins

We are using a third-party service to create and use vouchers. There are 80k+ vouchers already made. One of our cronjobs checks the status (used/unused) of each voucher one by one, synchronously, and updates it in our server database. It takes 2 hours to complete one pass, then it continues from the first voucher for the next pass.
Constraints:
The third party supports 6 queries per second (QPS).
We have only a primary Jenkins server and no agent nodes.
With one Jenkins server, can we improve the execution time?
Can we set up multiple jobs executing in parallel on the primary Jenkins server for the same cronjob? For example, the first 50k records are processed by one job and the rest by another.
If you have room to vertically scale your VM in case you hit a resource (CPU, memory) bottleneck, you should be able to achieve the performance you need. In my view, the best option is using parallel stages in your Pipeline. If you know the batch sizes beforehand, you can hardcode them within each stage. If you want to add some logic that determines how many records you have and allocates them accordingly, you can create a Pipeline with dynamic stages, something like below.
pipeline {
    agent any
    stages {
        stage('Parallel') {
            steps {
                script {
                    parallel parallelJobs()
                }
            }
        }
    }
}

def getBatches() {
    // Have some logic to allocate batches
    return ["1-20000", "20000-30000", "30000-50000"]
}

def parallelJobs() {
    def jobs = [:]
    for (batch in getBatches()) {
        // Copy the loop variable into a local so each closure captures its
        // own value instead of the shared loop variable.
        def b = batch
        jobs[b] = {
            stage(b) {
                echo "Processing Batch $b"
            }
        }
    }
    return jobs
}
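If the record count has to be discovered at runtime rather than hardcoded, getBatches() could compute the ranges itself. A minimal sketch, where countVouchers() is a hypothetical helper that returns the current total:
def getBatches() {
    int total = countVouchers()   // hypothetical helper: fetch the current record count
    int batchCount = 3            // tune this; see the QPS note below
    int size = (int) Math.ceil(total / (double) batchCount)
    def batches = []
    for (int start = 1; start <= total; start += size) {
        int end = Math.min(start + size - 1, total)
        batches << "${start}-${end}"
    }
    return batches
}
Note that the third party allows only 6 QPS overall, so splitting into more batches speeds things up only until the combined query rate of all parallel stages hits that cap.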

How to trigger a replay of another build from another job programmatically

How can I trigger a replay of a build from another job?
Context of the problem: I want a job that can prioritize a build over others for another job (which has concurrency disabled). I was thinking I could do this by killing/cancelling builds in the queue, triggering the new build, and then replaying the ones that were cancelled.
I think I know how to cancel the builds in the queue, i.e. with something like:
// Imports needed for this script (run e.g. from the Script Console):
import hudson.model.FreeStyleBuild
import hudson.model.Result
import jenkins.model.Jenkins
import org.jenkinsci.plugins.workflow.job.WorkflowRun
import org.jenkinsci.plugins.workflow.support.steps.StageStepExecution

def buildNumbers = []
def job = Jenkins.instance.getItemByFullName(TARGET_JOB)
def builds = job.builds
job = null
for (build in builds) {
    if (build.isBuilding() && !(build.isInProgress())) {
        if (build instanceof WorkflowRun) {
            WorkflowRun run = (WorkflowRun) build
            if (!dryRun) {
                // hard kill
                run.doKill()
                // release pipeline concurrency locks
                StageStepExecution.exit(run)
            }
            println "Killed ${run}"
            buildNumbers.add(build.getNumber())
        } else if (build instanceof FreeStyleBuild) {
            FreeStyleBuild run = (FreeStyleBuild) build
            if (!dryRun) {
                run.executor.interrupt(Result.ABORTED)
            }
            println "Killed ${run}"
        } else {
            println "WARNING: Don't know how to handle ${build.class}"
        }
    }
}
But say I have saved these builds or build numbers that were killed; how can I replay them?
I am open to other alternatives that solve this problem of prioritizing one build ahead of another.
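One direction that may be good enough (a sketch, not a verified replay mechanism): instead of a literal replay, re-schedule each killed build with its original parameters, which behaves identically whenever the job's behavior is fully driven by its parameters. ParametersAction and scheduleBuild2 are core Jenkins APIs; buildNumbers is the list collected above:
import hudson.model.ParametersAction
import jenkins.model.Jenkins

def job = Jenkins.instance.getItemByFullName(TARGET_JOB)
for (int number : buildNumbers) {
    def killed = job.getBuildByNumber(number)
    // Carry over the original build parameters, if the build had any.
    def params = killed?.getAction(ParametersAction)
    if (params != null) {
        job.scheduleBuild2(0, params)
    } else {
        job.scheduleBuild2(0)
    }
}
For a true Pipeline replay (same resolved script), the workflow-cps plugin exposes a ReplayAction on each WorkflowRun, but re-scheduling with parameters is simpler and also covers freestyle builds.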

Jenkins - how to run a single stage using 2 agents

I have a script that acts as a "test driver" (TD). That is, it drives test operations on a "system under test" (SUT). When I run my test framework script (tfs.sh) on my TD, it takes a SUT as an argument. The manual workflow looks like this:
TD ~ $ ./tfs.sh --sut=<IP of SUT>
I want to have a cluster of SUTs (they will have different OSes, and each OS will be repeated a few times) and a few TDs (like 4 or 5, so that driving tests won't be a bottleneck; actually executing them will be).
I don't know the Jenkins primitive with which to accomplish this. I would like it if a Jenkins stage could simply be invoked with 2 agents. One would obviously be the TD; that's what would actually run the script. The other would be the SUT. Jenkins would manage locking and resource contention this way.
As a workaround, I could simply have all my SUTs entirely unmanaged by Jenkins, and manually implement locking of the SUTs so 2 different TDs don't try to grab the same one. But why re-invent the wheel? And besides, I'd rather work on a Jenkins plugin to accomplish this than on a manual solution.
How can I run a single Jenkins stage on 2 (or more) agents?
If I understand your requirement correctly, you have a static list of SUTs and you want Jenkins to start the TDs, allocating a SUT to each TD. I'm assuming TDs and SUTs have a one-to-one relationship. The following is a very simple example of how you can achieve what you need.
pipeline {
    agent any
    stages {
        stage('parallel-run') {
            steps {
                script {
                    try {
                        def tests = getTestExecutionMap()
                        parallel tests
                    } catch (e) {
                        currentBuild.result = "FAILURE"
                    }
                }
            }
        }
    }
}

def getTestExecutionMap() {
    def tests = [:]
    def sutList = ["IP1", "IP2", "IP3"]
    int count = 0
    for (String ip : sutList) {
        // Copy the loop variable into a local so each closure captures its
        // own SUT address instead of the shared loop variable.
        def sutIp = ip
        tests["TEST${count}"] = {
            node {
                stage("TD with SUT ${sutIp}") {
                    script {
                        sh "./tfs.sh --sut=${sutIp}"
                    }
                }
            }
        }
        count++
    }
    return tests
}
The above pipeline runs one "TD with SUT" stage per SUT, in parallel.
Further, if you want to select the agent that runs the TD, you can specify the agent name in the node block: node(NAME) {...}. You can improve the agent-selection criteria accordingly; for example, you can check how many Jenkins executors are idle on a given agent and then decide how many TDs to start there.
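A rough sketch of that idle-executor check, assuming the TD agent names are known up front; getNode(), toComputer(), isOnline() and countIdle() are core Jenkins APIs:
import jenkins.model.Jenkins

// Return the named agent with the most idle executors, or null if none is online.
def pickLeastBusyAgent(List<String> agentNames) {
    def best = null
    int bestIdle = -1
    for (String name : agentNames) {
        def computer = Jenkins.instance.getNode(name)?.toComputer()
        if (computer != null && computer.isOnline() && computer.countIdle() > bestIdle) {
            bestIdle = computer.countIdle()
            best = name
        }
    }
    return best
}
You could then allocate each SUT with node(pickLeastBusyAgent(["td1", "td2"])) {...}, where "td1" and "td2" are hypothetical TD agent names.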

How can I wait for all executors inside Jenkinsfile's "parallel" block?

I'm new to Jenkins and configuring its scripts, so please forgive me if I say anything stupid.
I have a scripted Jenkins pipeline which distributes building of the codebase across multiple nodes, implemented using node blocks wrapped in a parallel block. Now, the catch is that after the building, I would like to perform a certain action on the files that were just built, on one of the nodes that built the code, but only after all of the nodes are done. Essentially, what I would like is something similar to a barrier, but between Jenkins nodes.
Simplified, my Jenkinsfile looks like this:
def buildConf = ["debug", "release"]
parallel buildConf.collectEntries { conf ->
    [ conf, {
        node {
            sh "./checkout_and_build.sh"
            // and here I need a barrier
            if (conf == "debug") {
                // I cannot do this outside this node block,
                // because execution may be redirected to a node
                // that doesn't have my files checked out and built
                sh "./post_build.sh"
            }
        }
    }]
}
Is there any way I can achieve this?
What you can do is add a global counter that counts the number of completed tasks. You then instruct each task that has a post-build part to wait until the counter equals the total number of tasks; only then can it do the post-build part. Like this:
def buildConf = ["debug", "release"]
def doneCounter = 0
parallel buildConf.collectEntries { conf ->
    [ conf, {
        node {
            sh "./checkout_and_build.sh"
            doneCounter++
            // and here I need a barrier
            if (conf == "debug") {
                waitUntil { doneCounter == buildConf.size() }
                // I cannot do this outside this node block,
                // because execution may be redirected to a node
                // that doesn't have my files checked out and built
                sh "./post_build.sh"
            }
        }
    }]
}
Please note, each task that has a post-build part will block its executor until all other parallel tasks are done and the post-build part can be executed. If you have loads of executors, or the tasks are fairly short, this is probably not a problem. But if you have few executors, it could lead to congestion, and if the number of executors is less than or equal to the number of parallel tasks that need post-build work, you can run into a deadlock!
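An alternative that avoids parking an executor on the barrier is to let parallel itself act as the barrier and move the built files with the core stash/unstash steps: stash inside each branch, then unstash on a fresh node for the post-build step. A sketch, where the 'build/**' include pattern is an assumption about where the outputs land:
def buildConf = ["debug", "release"]
parallel buildConf.collectEntries { conf ->
    [ conf, {
        node {
            sh "./checkout_and_build.sh"
            // Save this configuration's outputs under its own stash name.
            stash name: conf, includes: 'build/**'
        }
    }]
}
// parallel has returned, so every branch is finished: this is the barrier.
node {
    unstash 'debug'       // restore the debug outputs on whatever node we got
    sh "./post_build.sh"
}
The trade-off is that stashed files are copied through the controller, so this suits small-to-medium build outputs.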

throttling jenkins parallel in pipeline

I came across this message with the code below
in JENKINS-44085.
If I already have a map of branches that contains 50 items, but I want to run them in parallel 5 at a time, how do I need to modify this code?
My code already has a map of 50 items in a var named branches.
// Definitions from the original JENKINS-44085 snippet: a deque used as a
// semaphore, pre-filled with MAX_CONCURRENT tokens.
def MAX_CONCURRENT = 5
def latch = new java.util.concurrent.LinkedBlockingDeque(MAX_CONCURRENT)
def branches = [:]

// put a number of items into the queue to allow that number of branches to run
for (int i = 0; i < MAX_CONCURRENT; i++) {
    latch.offer("$i")
}
for (int i = 0; i < 500; i++) {
    def name = "$i"
    branches[name] = {
        def thing = null
        // this will not allow proceeding until there is something in the queue.
        waitUntil {
            thing = latch.pollFirst()
            return thing != null
        }
        try {
            echo "Hello from $name"
            sleep time: 5, unit: 'SECONDS'
            echo "Goodbye from $name"
        } finally {
            // put something back into the queue to allow others to proceed
            latch.offer(thing)
        }
    }
}
timestamps {
    parallel branches
}
timestamps {
parallel branches
}
This question is a bit old, but for me the problem was still relevant yesterday. In some cases your Jenkins jobs may be light on Jenkins itself but heavy on some other system, so you want to limit concurrency for that system's sake. In my opinion, capping max executors per build agent is not the right way to do that, because if your Jenkins cluster scales, you will have to adjust the limits.
To answer your question, you probably want to do something like this:
Create a branches map with numeric indexes "0", "1", etc.
In the try block of the code you pasted, have something like build(my_branches[name]).
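A sketch of that wrapping, reusing the latch from the snippet above and assuming my_branches is your existing 50-entry map; if its values are downstream job names rather than closures, replace the call() with the build() step as just described:
def throttled = [:]
def keys = my_branches.keySet() as List
for (int i = 0; i < keys.size(); i++) {
    def name = keys[i]   // local copy so each closure captures its own key
    throttled[name] = {
        // block until one of the MAX_CONCURRENT tokens is free
        def thing = null
        waitUntil {
            thing = latch.pollFirst()
            return thing != null
        }
        try {
            my_branches[name].call()   // run the original branch body
        } finally {
            latch.offer(thing)         // return the token
        }
    }
}
parallel throttled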
At least, that's how I was using that same workaround before. But then someone at work pointed out a better solution. I also commented this simpler solution in the Jira ticket you referred to. It requires the Lockable Resources Plugin: https://wiki.jenkins.io/display/JENKINS/Lockable+Resources+Plugin
Go to http://<your Jenkins URL>/configure and add X lockable resources with the label "XYZ".
Use them in your code as such:
def tests = [:]
for (...) {
    def test_num = "$i"
    tests["$test_num"] = {
        lock(label: "XYZ", quantity: 1, variable: "LOCKED") {
            println "Locked resource: ${env.LOCKED}"
            build(job: jobName, wait: true, parameters: parameters)
        }
    }
}
parallel tests
The nice thing about this is that you can use it across different jobs. In our case different jobs put load on XYZ, so having these global locks is very handy.
