I have a Jenkins shared library which contains an internal method (let's call it A()).
Currently the A() method runs sequentially between 10 and 100 times.
I want to run A() in parallel with a predefined bulk size (10).
I didn't find a way to implement it via Groovy; some of the methods require Jenkins script approval.
What is the way to do it via Groovy (not Jenkins' parallel step) without requiring any Jenkins approval?
I don't know if you need permissions for this, but just try it:
def A = {
    println "hello from A($it)"
    Thread.sleep((Math.random() * 2000) as long)
    println "hello from A($it)"
}

def threads = (0..5).collect { counter ->
    Thread.start { A(counter) }
}
threads.each { it.join() }
I put in some println calls and a sleep to demonstrate that the methods run in parallel. You can replace the body of A as you wish.
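If you need to cap the concurrency at your predefined bulk size (10), a minimal sketch, assuming plain Groovy threads are acceptable in your environment, is to process the calls in batches with collate:

def A = { println "hello from A($it)" }

def bulkSize = 10
(0..<100).collate(bulkSize).each { batch ->
    def threads = batch.collect { counter ->
        Thread.start { A(counter) }
    }
    // wait for the whole batch to finish before starting the next one
    threads.each { it.join() }
}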
I have a script that acts as a "test driver" (TD). That is, it drives test operations on a "system under test" (SUT). When I run my test framework script (tfs.sh) on my TD, it takes a SUT as an argument. The manual workflow looks like this:
TD ~ $ ./tfs.sh --sut=<IP of SUT>
I want to have a cluster of SUTs (they will have different OSes, and each will repeat a few times), and a few TDs (like, 4 or 5, so driving tests won't be a bottleneck, actually executing them will be).
I don't know the Jenkins primitive with which to accomplish this. I would like it if a Jenkins stage could simply be invoked with 2 agents. One would obviously be the TD, that's what would actually run the script. And the other would be the SUT. Jenkins would manage locking & resource contention like this.
As a workaround, I could simply have all my SUTs entirely unmanaged by Jenkins, and manually implement locking of the SUTs so 2 different TDs don't try to grab the same one. But why re-invent the wheel? And besides, I'd rather work on a Jenkins plugin to accomplish this than on a manual solution.
How can I run a single Jenkins stage on 2 (or more) agents?
If I understand your requirement correctly, you have a static list of SUTs and you want Jenkins to start the TDs by allocating SUTs for each TD. I'm assuming TDs and SUTs have a one-to-one relationship. Following is a very simple example of how you can achieve what you need.
pipeline {
    agent any
    stages {
        stage('parallel-run') {
            steps {
                script {
                    try {
                        def tests = getTestExecutionMap()
                        parallel tests
                    } catch (e) {
                        currentBuild.result = "FAILURE"
                    }
                }
            }
        }
    }
}
def getTestExecutionMap() {
    def tests = [:]
    def sutList = ["IP1", "IP2", "IP3"]
    int count = 0
    for (String ip : sutList) {
        // copy the loop variable into a local one so each closure captures
        // its own value; otherwise every branch would run with the last IP
        def sutIp = ip
        tests["TEST${count}"] = {
            node {
                stage("TD with SUT ${sutIp}") {
                    script {
                        sh "./tfs.sh --sut=${sutIp}"
                    }
                }
            }
        }
        count++
    }
    return tests
}
The above pipeline will start one parallel branch per SUT, each running the TD script against its allocated SUT.
Further, if you want to select the agent on which the TD runs, you can specify the agent's name or label in the node block: node(NAME) {...} (see the sketch below). You can refine the agent selection criteria accordingly; for example, you can check how many executors are idle on a given agent and then decide how many TDs to start there.
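For instance, a minimal variation of the branch above, assuming a hypothetical agent label td:

tests["TEST${count}"] = {
    node('td') {  // 'td' is a hypothetical label; any node expression works here
        stage("TD with SUT ${sutIp}") {
            script {
                sh "./tfs.sh --sut=${sutIp}"
            }
        }
    }
}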
I need to create a Jenkins pipeline to clear the build queue periodically. I have a script that I can run in the script console, but it does not work when I try to run it in the pipeline.
def q = Jenkins.instance.queue
for (queued in Jenkins.instance.queue.items) {
    q.cancel(queued.task)
}
I am pretty sure it has to do with importing classes, but I am having issues writing this script.
If you just want to clear the entire build queue from a pipeline, you can use the clear method of the Jenkins Queue object:
Jenkins.instance.queue.clear()
Notice: you will probably get a RejectedAccessException: Scripts not permitted to use method error when running this script, which means a Jenkins admin will have to approve it.
In addition, if you are running this code in a declarative pipeline, make sure to wrap it in a script directive, as in the sketch below.
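A minimal declarative sketch, assuming a hypothetical hourly cron trigger for the periodic cleanup:

pipeline {
    agent any
    triggers {
        cron('H * * * *')  // hypothetical: run once an hour
    }
    stages {
        stage('clear-queue') {
            steps {
                script {
                    Jenkins.instance.queue.clear()
                }
            }
        }
    }
}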
If you are using Jenkins Shared Libraries you can create a more generic clearBuildQueue function inside your library and later on use it in your various pipelines.
It can look something like:
@NonCPS
def clearBuildQueue(String pattern = '') {
    def queue = Jenkins.instance.queue
    if (pattern) {
        // cancel only the queued items whose task name matches the pattern
        queue.items.findAll { it.task.name =~ pattern }.each {
            println "Item '${it.task.name}' was cleared from the queue"
            queue.cancel(it.task)
        }
    } else {
        println "Clearing ${queue.items.length} items from the queue"
        queue.clear()
    }
}
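Usage from a pipeline might then look like this, assuming the function lives in a global variable (vars/clearBuildQueue.groovy) of a hypothetical library named my-shared-library:

@Library('my-shared-library') _  // hypothetical library name

node {
    clearBuildQueue()              // clear the entire queue
    clearBuildQueue('nightly-.*')  // or only items whose task name matches a pattern
}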
I have a somewhat unique setup where I need to be able to dynamically load Jenkinsfiles that live outside of the src I'm building. The Jenkinsfiles themselves usually call node() and then some build steps. This causes multiple executors to be eaten up unnecessarily, because I need to have already called node() in order to use the load step to run a Jenkinsfile, or to execute the Groovy if I read the Jenkinsfile as a string and execute it.
What I have in the job UI today:
@Library(value='myGlobalLib@head', changelog=false) _

node {
    load "${JENKINSFILES_ROOT}/${PROJECT_NAME}/Jenkinsfile"
}
The Jenkinsfile that's loaded usually also calls node(). For example:
node('agent-type-foo') {
    someBuildFlavor {
        buildProperty = "some value unique to this build"
        someConfig = ["VALUE1", "VALUE2", "VALUE3"]
        runTestTarget = true
    }
}
This causes 2 executors to be consumed during the pipeline run. Ideally, I load the Jenkinsfiles without first calling node(), but whenever I try, I get an error message stating:
"Required context class hudson.FilePath is missing
Perhaps you forgot to surround the code with a step that provides this, such as: node"
Is there any way to load a Jenkinsfile or execute groovy without first having hudson.FilePath context? I can't seem to find anything in the doc. I'm at the point where I'm going to preprocess the Jenkinsfiles to remove their initial call to node() and call node() with the value the Jenkinsfile was using, then load the rest of the file, but, that's somewhat too brittle for me to be happy with.
When using the load step, Jenkins evaluates the file. You can wrap your Jenkinsfile's logic in a function (named run() in my example) so that it will load but not run automatically.
def run() {
    node('agent-type-foo') {
        someBuildFlavor {
            buildProperty = "some value unique to this build"
            someConfig = ["VALUE1", "VALUE2", "VALUE3"]
            runTestTarget = true
        }
    }
}

// This return statement at the end of the Jenkinsfile is important
return this
Call it from your job script like this:
def jenkinsfile
node {
    jenkinsfile = load "${JENKINSFILES_ROOT}/${PROJECT_NAME}/Jenkinsfile"
}
jenkinsfile.run()
This way there are no more nested node blocks, because the first one is closed before the run() function is called.
I have a requirement to run a set of tasks for a build in parallel. The tasks for a build are dynamic and may change. I need some help with the implementation; the details are below.
The task details for a build will be generated dynamically in an XML file, which will contain the information about which tasks have to be executed in parallel and which serially.
For example:
Say there is a build A.
It has the tasks below, with the following order of execution: first task1 has to be executed, then task2 and task3 are executed in parallel, and then task4:
task1
task2,task3
task4
These details will be in a dynamically generated XML file. How can I parse that XML and schedule the tasks accordingly using the pipeline plugin? I need some idea to start with.
You can use Groovy to read the file from the workspace (readFile) and then generate a map containing the different closures, similar to the following:
parallel(
    task2: {
        node {
            unstash('my-workspace')
            sh('...')
        }
    },
    task3: {
        node {
            unstash('my-workspace')
            sh('...')
        }
    }
)
To generate such a data structure, you iterate over the task data obtained by parsing the XML file contents you read previously; see the sketch below.
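A minimal sketch of that step, assuming a hypothetical XML layout in which each <group> element holds tasks that run in parallel while the groups themselves run serially:

// tasks.xml (hypothetical layout):
// <build>
//   <group><task>task1</task></group>
//   <group><task>task2</task><task>task3</task></group>
//   <group><task>task4</task></group>
// </build>

@NonCPS
def parseTaskGroups(String xmlText) {
    // return plain lists so that no non-serializable GPathResult
    // objects escape into the CPS-transformed pipeline code
    new XmlSlurper().parseText(xmlText).group.collect { group ->
        group.task.collect { it.text() }
    }
}

def taskGroups = parseTaskGroups(readFile('tasks.xml'))
for (group in taskGroups) {
    def branches = [:]
    group.each { taskName ->
        branches[taskName] = {
            node {
                unstash('my-workspace')
                sh("./run-task.sh ${taskName}")  // hypothetical task runner
            }
        }
    }
    parallel(branches)  // groups run serially, tasks within a group in parallel
}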
Incidentally, I gave a talk about pipelines yesterday and included a very similar example (presentation, slide 34 ff.). In contrast, I read the list of "tasks" from the output of another command. The complete code can be found here (I avoid pasting all of it and instead refer to this off-site resource).
The somewhat magic bit is the following:
def parallelConverge(ArrayList<String> instanceNames) {
    def parallelNodes = [:]
    for (int i = 0; i < instanceNames.size(); i++) {
        def instanceName = instanceNames.get(i)
        parallelNodes[instanceName] = this.getNodeForInstance(instanceName)
    }
    parallel parallelNodes
}
Closure getNodeForInstance(String instanceName) {
    return {
        // this node (one per instance) is later executed in parallel
        node {
            // restore workspace
            unstash('my-workspace')
            sh('kitchen test --destroy always ' + instanceName)
        }
    }
}
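A call might then look like this (the instance names are hypothetical):

parallelConverge(['default-ubuntu-1604', 'default-centos-7'])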
I have a very long workflow for building and testing our application. So long, in fact, that when we try to load the main workflow script, we get this exception:
java.lang.ClassFormatError: Invalid method Code length 67768 in class file WorkflowScript
I am not proud of this. I'm trying to split the workflow into smaller scripts that we load from the main workflow script, but we are running into an issue with variable scoping. For example:
def a = 'foo' // some variable referenced in multiple workflow stages
node {
    echo a
}
//... and then a whole bunch of other stages
might become
def a = 'foo' // some variable referenced in multiple workflow stages
node {
    git: ...
    load 'flowPartA.groovy'
}()
where flowPartA.groovy looks like:
{ ->
    node {
        echo a
    }
}
Based on my understanding of the documentation, where flowPartA.groovy is interpreted as a closure, I expected the variable 'a' to remain in scope, but instead I get an exception to the contrary:
groovy.lang.MissingPropertyException: No such property: a for class: groovy.lang.Binding
Am I missing something about the way workflow interprets the flow scripts? Is there a good way to take a huge workflow that uses many, many parameters and split it into smaller chunks?
You have to define a function in the external Groovy file and call it, passing all required parameters:
def a = 'foo'
node('slave') {
    git '…'
    def flow = load 'flowPartA.groovy'
    flow.echoFromA(a)
}
And flowPartA.groovy contains:
def echoFromA(String a) {
    echo a
}

return this
See the documentation for more information.
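If many variables are shared across stages, one variation on the same pattern (a sketch with hypothetical names) is to pass them in a single map:

// flowPartA.groovy
def runStages(Map params) {
    node {
        echo params.a
        echo params.buildTarget  // hypothetical second shared variable
    }
}
return this

And in the main workflow script:

def flow
node('slave') {
    git '…'
    flow = load 'flowPartA.groovy'
}
flow.runStages(a: 'foo', buildTarget: 'release')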