How can I retrieve dynamically-named stashes in a Jenkins pipeline?

I have a Jenkins pipeline, which runs a suite of automated tests against a variety of environments in separate workers using the matrix directive. At the end of this, I would like to combine the code coverage output of the various test suite runs into a single file before collecting them, to ensure that the results are accurate. This sounds like it should be simple:
1. For each matrix cell, stash the coverage output file with a unique stash name, based on the matrix cell values.
2. After the test runs are complete, unstash all of the files on the "main" worker and combine them.
However, the fact that the stashes are dynamically named makes step 2 difficult. This leaves me, seemingly, with three options:
Hardcode the matrix axes again when unstashing. Not particularly appealing.
Retrieve the matrix axes programmatically. It seems like it should be possible, but I'm uncertain how to go from the FlowNodeWrapper representing the matrix stage to the underlying axis strings.
List all stashes for the build, and pick the ones I want. Also a viable solution if it's possible, since the stash names follow a pattern, but I'm not even sure where to start with this one. There is an open issue related to this in the Jenkins issue board, but it doesn't seem like it'll be moving anytime soon.
In short: how can I achieve this? How can I either:
Go from a FlowNodeWrapper to the matrix axes?
Find my stashes in a different way?

1. For each matrix cell, stash the coverage output file with a unique stash name, based on the matrix cell values.
Right. I'm not familiar with matrix, so I don't know for sure how you can get a unique name, but in many cases you can use env.STAGE_NAME.
2. After the test runs are complete, unstash all of the files on the "main" worker and combine them.
In step 1, keep track of the stash names you've used. Then step 2 is easy.
With a scripted pipeline, that looks like this:
def stashes = [:]
…
stage(…) {
    …
    String stash_name = env.STAGE_NAME
    // stash takes named parameters (name:, includes:, …)
    stash name: stash_name, includes: …
    stashes[stash_name] = 1
}
…
stage('Coverage analysis') {
    // iterate over the recorded names (the map's keys), not its entries
    for (stash_name in stashes.keySet()) {
        unstash stash_name
    }
    …
}
I don't know if that works with a declarative pipeline.
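For reference, here is an untested sketch of how the same idea might translate to a declarative matrix. The axes (OS, BROWSER), the coverage.xml path, and the agents are all assumptions; declarative matrix exposes each axis value as an environment variable named after the axis, which gives you the unique per-cell stash name:

// Script-level list shared by all cells; synchronizedList guards against
// concurrent appends from parallel cells (may need script approval when sandboxed)
def stashNames = java.util.Collections.synchronizedList([])

pipeline {
    agent none
    stages {
        stage('Test') {
            matrix {
                axes {
                    axis {
                        name 'OS'
                        values 'linux', 'windows'
                    }
                    axis {
                        name 'BROWSER'
                        values 'chrome', 'firefox'
                    }
                }
                agent any
                stages {
                    stage('Run tests') {
                        steps {
                            script {
                                // each cell sees its axis values as env vars
                                String stashName = "coverage-${env.OS}-${env.BROWSER}"
                                stash name: stashName, includes: 'coverage.xml'
                                stashNames << stashName
                            }
                        }
                    }
                }
            }
        }
        stage('Coverage analysis') {
            agent any
            steps {
                script {
                    // all cells are done; pull every recorded stash back in
                    for (name in stashNames) {
                        unstash name
                    }
                }
            }
        }
    }
}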

Related

Jenkins workspace variable is swapping during parallel build stages

I have a WS variable which saves the Jenkins WORKSPACE variable.
During the stages I always check whether WS == WORKSPACE (the Jenkins built-in variable).
What I saw is that during parallel runs (when two different workspaces are created, C:/jenkins/workspace#1 and C:/jenkins/workspace#2), the $WORKSPACE variable swaps between those two parallel builds.
The problem reproduces rarely, in less than 10 percent of cases, but I find it quite strange. The first build, AVB_Aplicattions_BOSCH-3, goes through three stages, and in the fourth stage its $WORKSPACE variable swaps to AVB_APLICATTIONS_BOSCH-4 (the other parallel build). If I look at the other build (AVB_APLICATTIONS_BOSCH-4), I see the same problem: its workspace becomes AVB_APLICATTIONS_BOSCH-3.
During these builds I compile using CMake files, and I'm afraid the results will not be correct.
I was thinking of using the built-in dir() function during each stage to be sure I'm in the same workspace.
Does someone know a good approach for this case, and also why it's happening?
I don't want to deactivate parallel builds.
The problem is that Jenkins variable scopes aren't really safe with parallel stages (see the sources linked below). There are actually several questions and answers (and workarounds) about working with variables in parallel stages. Long story short: don't use global variables; pay extra attention to defining local variables with def to limit their scope (a minimal sketch follows the links below):
Jenkins - matrix jobs - variables on different slaves overwrite each other?
Variables get overwritten in parallel stages
Parallel matrix and global variables without race condition?
Jenkins community links:
https://issues.jenkins.io/browse/JENKINS-55040
https://issues.jenkins.io/browse/JENKINS-54732
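To make the failure mode concrete, here is a minimal scripted-pipeline sketch (the variable names are illustrative, not from the question): the undeclared variable has a single copy shared by every branch, while the def variable is private to each closure.

// 'shared' has no def, so it lives in the script binding: one copy, raced
// over by both branches, just like the WS variable in the question
shared = ''
def branches = [:]
for (name in ['A', 'B']) {
    def branchName = name              // def: fresh local per loop iteration
    branches[branchName] = {
        node {
            shared = env.WORKSPACE     // unsafe: both branches write here
            def myWs = env.WORKSPACE   // safe: scoped to this closure
            echo "${branchName}: myWs=${myWs}, shared=${shared}"
        }
    }
}
parallel branches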

Jenkins plugin with viewing/aggregating possibilities depending on one of the parameters

I'm looking for a plugin that would give me aggregated settings and a view over many cases, the same way a multi-branch pipeline does. But instead of basing it on various branches, I want to base it on one branch, varying on parameters. In the multi-branch pipeline view, instead of "Branches" I'm looking for "Cases", and instead of the "Name" column I need a configurable parameter.
In addition, I need to have various periodic build triggers, of the form:
H 22 * * 5 %param1=value1 %param2=value3
H 22 * * 5 %param1=value2 %param2=value3
The second part could be done in a standard job, but there will be many such cases, launched periodically every week, every two weeks, or every month, and the difference in param1 is crucial: it is important to have it readable and easily visible, so I can quickly tell which case has failed.
I was looking for such a plugin but couldn't find one. Maybe someone knows such a plugin, or another way to solve this.
One alternative I have is creating a "super" job whose build steps launch my current job with specific parameters. But then the view would change from many rows to many columns, and since there are over 20 cases, that would IMHO significantly decrease readability. Additionally, not all cases would be launched with the same periodicity, so I would need ready-made sets selected by a parameter, and most of the super-build's cases would be skipped, with the result that one might not see the last result for some case.
Note that param2 always has the same value for periodic launches; other values are used only with a manual trigger. param2 can, but doesn't have to, be visible in the multi-branch-pipeline-like view.
I hope my explanation of the issue is clear. Looking forward to answers/suggestions etc. :)
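As an aside, the trigger half of this (though not the aggregated view) matches what the Parameterized Scheduler plugin provides; a minimal declarative sketch, assuming that plugin and reusing the parameter names from the question:

pipeline {
    agent any
    parameters {
        string(name: 'param1', defaultValue: 'value1', description: 'distinguishes the case')
        string(name: 'param2', defaultValue: 'value3', description: 'fixed for periodic launches')
    }
    triggers {
        // one line per periodic case; %key=value pairs are ;-separated
        parameterizedCron('''
            H 22 * * 5 %param1=value1;param2=value3
            H 22 * * 5 %param1=value2;param2=value3
        ''')
    }
    stages {
        stage('Run case') {
            steps {
                echo "Running case param1=${params.param1}, param2=${params.param2}"
            }
        }
    }
}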

Jenkins - Declarative Pipeline - Multiple Key-Values pairs in Matrix Cell

I am building a Jenkins Declarative Pipeline.
Here's a gist of what I'm trying to do (as an arbitrary example):
There is a list of platforms. I have put those in a matrix cell for readability and parallelism.
Each of them has an associated browser.
I want the matrix to be executed so that the two lists iterate together, index by index.
For example:
Platforms = ["Windows", "Mac", "Linux"]
Browsers = ["Edge", "Chrome", "Firefox"]
I want the output stages to have these pairings for (Platforms, Browsers):
[("Windows", "Edge"),("Mac", "Chrome"),("Linux", "Firefox")]
In the actual case, this list is 12 entries long, so I don't want to define that many stages with when directives to pair the values manually, since everything else in these stages is the same.
Is there a way to do this, or a better approach?
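One commonly suggested workaround, sketched below under the assumption that everything except the pairing is identical: declarative matrix always expands the full cross-product (trimmed only by excludes), so index-by-index pairing generally means generating the parallel branches yourself from a script block.

// Zip the two lists index-by-index instead of taking the cross-product
def platforms = ['Windows', 'Mac', 'Linux']
def browsers  = ['Edge', 'Chrome', 'Firefox']
def branches = [:]
for (int i = 0; i < platforms.size(); i++) {
    def platform = platforms[i]   // def: keep each closure's copy separate
    def browser  = browsers[i]
    branches["${platform}-${browser}"] = {
        node {
            echo "Testing ${browser} on ${platform}"
            // ... the shared test steps go here ...
        }
    }
}
parallel branches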

Iterative processing in Dataflow

As shown here Dataflow pipelines are represented by a fixed DAG. I'm wondering if it's possible to implement a pipeline where the processing proceeds until a dynamically evaluated condition is satisfied based on the data computed so far.
Here's some pseudo code to illustrate what I'd like to implement:
PCollection pco = null
while (true):
    pco = pco.apply(someTransform())
    if (conditionSatisfied(pco)):
        break
pco.Write()
It seems like you really want iterative computations. Right now Dataflow does not provide support for that, but we are aware that it is a very important use case and we are working on finding the right set of APIs to express it.
For now your workarounds are:
Iteratively run whole pipelines (run pipeline, inspect output, run again if the condition is not satisfied, etc). This has the obvious downside of pipeline setup and teardown overhead.
Build a pipeline with a hard-coded number of iterations by .apply()'ing in a loop unconditionally, then run the whole pipeline.
A combination of the two, e.g. run fixed 5-iteration pipelines until you're satisfied with the result.
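For illustration, the second workaround might look like the following Groovy/Java-style pseudocode; readInput(), someTransform(), and writeOutput() are placeholders carried over from the question, not real SDK calls.

// Unroll a fixed number of iterations into the DAG, unconditionally
PCollection pco = pipeline.apply(readInput())   // placeholder source
for (int i = 0; i < 5; i++) {
    pco = pco.apply(someTransform())            // same transform, five times
}
pco.apply(writeOutput())                        // placeholder sink
pipeline.run()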

Being clever when copying artifacts with Jenkins and multi-configurations

Suppose that I have a (fictional) set of projects: FOO and BAR. Both of these projects have some sort of multi-configuration option.
FOO has a matrix on axis X which takes values in { x1, ..., xn } (so there are n builds of FOO). BAR has a matrix on axis Y which takes values in { y1, ..., ym } (so there are m builds of BAR).
However, BAR needs to copy some artifacts from FOO. It turns out that Y is a strictly finer partition than X. For example, X might take the values { WINDOWS, LINUX } and Y might be { WINDOWS_XP, WINDOWS_7, DEBIAN_TESTING, FEDORA } or whatever.
Is it possible to get BAR to do some sort of table lookup to work out what configuration of FOO it needs when it copies artifacts across? I can easily write a shell script to spit out the mapping, but I can't work out how to invoke it when Jenkins is working out what it needs to copy.
At the moment, a hacky solution is to have two axes on FOO, one for X and one for Y, and then filter out combinations that don't make sense. But the resulting combination filter is ridiculous and the matrix is very sparse. Yuck.
A solution that I don't like is to parametrise FOO on Y instead: this would be a huge waste of compile time. And, worse, the generated artefacts are pretty big, so even if you did some sort of caching, you'd still have to keep unnecessary copies floating around.
Can't say I fully understand the intricacies of your matrices, but I think I can help you with your actual question:
"I can easily write a shell script to spit out the mapping, but I can't work out how to invoke it when Jenkins is working out what it needs to copy"
The Archive the artifacts and Copy artifacts from another project post-build actions can take Java-style wildcards, like module/dist/**/*.zip, as well as environment variables/parameters, like ${PARAM}, for the list of artifacts. You can use commas to add more artifacts.
The on-page help for Copy artifacts from another project states how to copy artifacts of a specific matrix configuration: To copy from a particular configuration, enter JOBNAME/AXIS=VALUE. This goes in the Project Name attribute, which can also contain params as ${PARAM}.
So, in your BAR job, have a Copy Artifacts build step, with Project Name being FOO/X=${mymapping}. What this will do is: every time a configuration of BAR is run, it will copy artifacts only from FOO with configuration of X=${mymapping}.
Now you need to set the value of ${mymapping} dynamically every time BAR is run. A simple script like this may do the trick:
[[ ${Y:0:7} == "WINDOWS" ]] && mymapping=WINDOWS || mymapping=LINUX  # ${Y:0:7} = first 7 chars of axis Y
Finally, you need to use EnvInject plugin to make this variable available to the rest of the build steps, including the Copy Artifacts step.
So, every time a BAR configuration runs, it will look at its own configuration axis Y; if that value starts with WINDOWS, it will set ${mymapping} to WINDOWS, else to LINUX. This ${mymapping} is then made available to the rest of the build steps. When the Copy Artifacts build step is executed, it will only copy artifacts from FOO where the X axis matches ${mymapping} (i.e. either WINDOWS or LINUX).
Full Setup
Install EnvInject plugin.
In BAR job configuration, tick Prepare an environment for the run (part of EnvInject plugin).
Make sure both checkboxes for keeping existing variables are checked.
In Script Content copy your script:
[[ ${Y:0:7} == "WINDOWS" ]] && mymapping=WINDOWS || mymapping=LINUX
Under Build steps, configure Copy Artifacts build step.
Set Project name parameter to FOO/X=${mymapping}
Configure the rest as usual.
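As an aside, if BAR were a pipeline job instead of a freestyle one, the same lookup could be done inline, assuming the Copy Artifact plugin's copyArtifacts step (Y here stands for the axis value, exposed as an environment variable):

// Map this build's finer Y value onto FOO's coarser X axis, then copy
def mymapping = env.Y.startsWith('WINDOWS') ? 'WINDOWS' : 'LINUX'
copyArtifacts projectName: "FOO/X=${mymapping}", selector: lastSuccessful()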
