Jenkins workspace variable is swapping during parallel build stages - jenkins

I have a WS variable that stores the Jenkins WORKSPACE variable.
During the stages I always check whether WS == WORKSPACE (the built-in Jenkins variable).
What I saw is that during parallel runs, when two different workspaces are created (C:/jenkins/workspace#1 and C:/jenkins/workspace#2), $WORKSPACE swaps between those two parallel builds.
The problem reproduces rarely, in less than 10 percent of cases, but I find it quite strange. In the picture above, the first workspace is AVB_Aplicattions_BOSCH-3; it goes through three stages, and in the fourth stage its $WORKSPACE variable swaps to AVB_APLICATTIONS_BOSCH-4 (the other parallel build). If I look at the other build (AVB_APLICATTIONS_BOSCH-4), I see the same problem: its workspace becomes AVB_APLICATTIONS_BOSCH-3.
During these builds I compile using CMake files, and I'm afraid the results will not be correct.
I was thinking of using the built-in dir() step during each stage to make sure I'm in the same workspace.
Does anyone know a good approach for this case, and also why it's happening?
I don't want to deactivate parallel builds.

The problem is that Jenkins variable scopes aren't really safe with parallel stages (see the sources/links below). There are actually several questions and answers (and workarounds) about how to work with variables when using parallel stages. Long story short: don't use global variables; pay extra attention to defining local variables with def to limit their scope (see the sketch after the links below):
Jenkins - matrix jobs - variables on different slaves overwrite each other?
Variables get overwritten in parallel stages
Parallel matrix and global variables without race condition?
Jenkins community links:
https://issues.jenkins.io/browse/JENKINS-55040
https://issues.jenkins.io/browse/JENKINS-54732
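
As a minimal illustration of that rule (a sketch only; the branch names, stage names, and echo output are made up, not taken from the question): a variable assigned without def goes into the script's global binding and is shared by both parallel branches, while a def variable stays local to its branch.

// Scripted-pipeline sketch: 'sharedWs' (no 'def') lives in the global script binding
// and is shared by both branches -- this is the pattern that lets values "swap".
sharedWs = ''
parallel(
    'branch-A': {
        node {
            def ws = env.WORKSPACE   // 'def' keeps this value local to branch A
            sharedWs = env.WORKSPACE // shared: branch B may overwrite this at any time
            stage('Build A') {
                echo "local ws=${ws}, shared ws=${sharedWs}"
            }
        }
    },
    'branch-B': {
        node {
            def ws = env.WORKSPACE
            sharedWs = env.WORKSPACE
            stage('Build B') {
                echo "local ws=${ws}, shared ws=${sharedWs}"
            }
        }
    }
)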

Related

How can I retrieve dynamically-named stashes in a Jenkins pipeline?

I have a Jenkins pipeline, which runs a suite of automated tests against a variety of environments in separate workers using the matrix directive. At the end of this, I would like to combine the code coverage output of the various test suite runs into a single file before collecting them, to ensure that the results are accurate. This sounds like it should be simple:
For each matrix cell, stash the coverage output file with a unique stash name, based on the matrix cell values.
After the test runs are complete, unstash all of the files on the "main" worker and combine them.
However, the fact that the stashes are dynamically named makes step 2 difficult. This leaves me, seemingly, with three options:
Hardcode the matrix axes again when unstashing. Not particularly appealing.
Retrieve the matrix axes programmatically. It seems like it should be possible, but I'm uncertain how to go from the FlowNodeWrapper representing the matrix stage to the underlying axis strings.
List all stashes for the build, and pick the ones I want. Also a viable solution if it's possible, since the stash names follow a pattern, but I'm not even sure where to start with this one. There is an open issue related to this in the Jenkins issue board, but it doesn't seem like it'll be moving anytime soon.
In short: how can I achieve this? How can I either:
Go from a FlowNodeWrapper to the matrix axes?
Find my stashes in a different way?
1. For each matrix cell, stash the coverage output file with a unique stash name, based on the matrix cell values.
Right. I'm not familiar with matrix, so I don't know for sure how you can get a unique name, but in many cases you can use env.STAGE_NAME.
2. After the test runs are complete, unstash all of the files on the "main" worker and combine them.
In step 1, keep track of the stash names you've used. Then step 2 is easy.
With a scripted pipeline, that's easy:
def stashes = [:]
…
stage(…) {
    …
    String stash_name = env.STAGE_NAME
    stash name: stash_name, …
    stashes[stash_name] = 1          // the map doubles as a set of used stash names
}
…
stage('Coverage analysis') {
    for (stash_name in stashes.keySet()) {   // iterate the names, not the map entries
        unstash stash_name
    }
    …
}
I don't know if that works with a declarative pipeline.
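
For what it's worth, here is how the same bookkeeping could look in a declarative matrix pipeline. This is only a sketch under assumptions that are not part of the answer: the axis name PLATFORM, its values, the coverage.xml file name, and the stage layout are all made up for illustration.

// Declarative sketch; a synchronized list is used because matrix cells run in parallel.
def stashNames = java.util.Collections.synchronizedList([])
pipeline {
    agent none
    stages {
        stage('Tests') {
            matrix {
                axes {
                    axis {
                        name 'PLATFORM'
                        values 'linux', 'windows'
                    }
                }
                stages {
                    stage('Run suite') {
                        agent any
                        steps {
                            script {
                                // Build a name that is unique per matrix cell from the axis value.
                                String stashName = "coverage-${env.PLATFORM}"
                                // ... run the tests here, producing coverage.xml ...
                                stash name: stashName, includes: 'coverage.xml'
                                stashNames << stashName
                            }
                        }
                    }
                }
            }
        }
        stage('Coverage analysis') {
            agent any
            steps {
                script {
                    for (name in stashNames) {
                        // Unstash each cell's file into its own directory so nothing is overwritten.
                        dir(name) { unstash name }
                    }
                    // ... combine the per-cell coverage files here ...
                }
            }
        }
    }
}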

Are labels of Jenkins build slaves checked in a case sensitive manner for job scripts?

When I have two build clients, where one has the label "Windows" (first character capitalized) and the other has the label "windows" (all lower case), do I need to write a job label expression of "(Windows || windows)" (assuming the case of the label is respected), or is either "Windows" or "windows" sufficient (assuming the comparison is case-insensitive) to freely run the job on whichever of the two machines is free first?
I have to ask because I couldn't determine from the docs how this is set up. (Some docs even indicate that certain other check operations are configurable with respect to case sensitivity.)
Node labels are case sensitive in Jenkins. So, when you write (Windows || windows) as the target label, Jenkins will first try to run the job on the agent with label "Windows"; if that agent doesn't respond, it will then try to run the same job on the second agent with label "windows". If you want to run a job freely on any of the available agents, there are two ways to accomplish that:
Define the label expression for those agents with the OR (||) operator (for example "Windows || windows"), which you already have (see the Pipeline sketch below).
Have the same label name on both agents (for example "windows") and have your job run with the label "windows". This behaves a little differently: when you run the job with target label "windows", Jenkins will send the request to both nodes but will run the job on whichever agent responds first.
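
If the job happens to be a Pipeline job, the same case-sensitive label expression can go in the agent directive. A minimal sketch (the stage content is made up):

// The label expression is evaluated case-sensitively, so both spellings are listed.
pipeline {
    agent { label 'Windows || windows' }
    stages {
        stage('Build') {
            steps {
                echo "Running on ${env.NODE_NAME}"
            }
        }
    }
}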

Iterative processing in Dataflow

As shown here, Dataflow pipelines are represented by a fixed DAG. I'm wondering if it's possible to implement a pipeline where the processing proceeds until a dynamically evaluated condition is satisfied, based on the data computed so far.
Here's some pseudo code to illustrate what I'd like to implement:
PCollection pco = null
while(true):
    pco = pco.apply(someTransform())
    if (conditionSatisfied(pco)):
        break
pco.Write()
It seems like you really want iterative computations. Right now Dataflow does not provide support for that, but we are aware that it is a very important use case and we are working on finding the right set of APIs to express it.
For now your workarounds are:
Iteratively run whole pipelines (run pipeline, inspect output, run again if the condition is not satisfied, etc). This has the obvious downside of pipeline setup and teardown overhead.
Build a pipeline with a hard-coded number of iterations by .apply()'ing in a loop unconditionally, then run the whole pipeline.
A combination of the two, e.g. run fixed 5-iteration pipelines until you're satisfied with the result.

Being clever when copying artifacts with Jenkins and multi-configurations

Suppose that I have a (fictional) set of projects: FOO and BAR. Both of these projects have some sort of multi-configuration option.
FOO has a matrix on axis X which takes values in { x1, ..., xn } (so there are n builds of FOO). BAR has a matrix on axis Y which takes values in { y1, ..., ym } (so there are m builds of BAR).
However, BAR needs to copy some artifacts from FOO. It turns out that Y is a strictly finer partition than X. For example, X might take the values { WINDOWS, LINUX } and Y might be { WINDOWS_XP, WINDOWS_7, DEBIAN_TESTING, FEDORA } or whatever.
Is it possible to get BAR to do some sort of table lookup to work out what configuration of FOO it needs when it copies artifacts across? I can easily write a shell script to spit out the mapping, but I can't work out how to invoke it when Jenkins is working out what it needs to copy.
At the moment, a hacky solution is to have two axes on FOO, one for X and one for Y, and then filter out combinations that don't make sense. But the resulting combination filter is ridiculous and the matrix is very sparse. Yuck.
A solution that I don't like is to parametrise FOO on Y instead: this would be a huge waste of compile time. And, worse, the generated artefacts are pretty big, so even if you did some sort of caching, you'd still have to keep unnecessary copies floating around.
Can't say I fully understand the intricacies of your matrices, but I think I can help you with your actual question:
"I can easily write a shell script to spit out the mapping, but I can't work out how to invoke it when Jenkins is working out what it needs to copy"
The Archive the artifacts post-build action and the Copy artifacts from another project build step can take Java-style wildcards, like module/dist/**/*.zip, as well as environment variables/parameters, like ${PARAM}, for the list of artifacts. You can use commas to add more artifacts.
The on-page help for Copy artifacts from another project states how to copy artifacts of a specific matrix configuration: "To copy from a particular configuration, enter JOBNAME/AXIS=VALUE"; this goes in the Project Name attribute. That Project Name attribute can also contain parameters such as ${PARAM}.
So, in your BAR job, have a Copy Artifacts build step, with Project Name being FOO/X=${mymapping}. What this will do is: every time a configuration of BAR is run, it will copy artifacts only from FOO with configuration of X=${mymapping}.
Now you need to set the value of ${mymapping} dynamically every time BAR is run. A simple script like this may do the trick:
[[ ${Y:0:7} == "WINDOWS" ]] && mymapping=WINDOWS || mymapping=LINUX
Finally, you need to use the EnvInject plugin to make this variable available to the rest of the build steps, including the Copy Artifacts step.
So, every time a BAR configuration runs, it will look at its own configuration axis Y; if that axis value starts with WINDOWS, it will set ${mymapping} to WINDOWS, otherwise to LINUX. This ${mymapping} is then made available to the rest of the build steps. When the Copy Artifacts build step is executed, it will only copy artifacts from FOO where the X axis matches ${mymapping} (i.e. either WINDOWS or LINUX). A Pipeline-style sketch of the same lookup follows the setup steps below.
Full Setup
Install EnvInject plugin.
In BAR job configuration, tick Prepare an environment for the run (part of EnvInject plugin).
Make sure both checkboxes for keeping existing variables are checked.
In Script Content copy your script:
[[ ${Y:0:7} == "WINDOWS" ]] && mymapping=WINDOWS || mymapping=LINUX
Under Build steps, configure Copy Artifacts build step.
Set Project name parameter to FOO/X=${mymapping}
Configure the rest as usual.
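
For comparison, if BAR were a Pipeline job instead of a freestyle matrix job, the same lookup could be done inline with the copyArtifacts step of the Copy Artifact plugin. This is only a sketch under that assumption; the variable Y stands for the axis value of the running configuration, and the WINDOWS/LINUX mapping mirrors the shell one-liner above.

// Pipeline sketch (assumption: BAR is a Pipeline job); 'Y' is the hypothetical axis value.
node {
    def y = env.Y ?: ''
    def mymapping = y.startsWith('WINDOWS') ? 'WINDOWS' : 'LINUX'
    // Copy artifacts only from the matching FOO configuration.
    copyArtifacts(
        projectName: "FOO/X=${mymapping}",
        selector: lastSuccessful()
    )
}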

Jenkins (Hudson) - Managing dependencies between parallel builds

Using Jenkins or Hudson I would like to create a pipeline of builds with fork and join points, for example:
          job A
         /     \
     job B     job C
       |         |
     job D       |
         \     /
          job E
I would like to create arbitrary series-parallel graphs like this and leave Jenkins the scheduling freedom to execute B/D and C in parallel whenever a slave is available.
The Join Plugin immediately joins after B has executed. The Build Pipeline Plugin does not support fork/join points. Not sure if this is possible with the Throttle Concurrent Builds Plugin (or deprecated Locks & Latches Plugin); if so I could not figure out how. One solution could be to specify build dependencies with Apache Ivy and use the Ivy Plugin. However, my jobs are all Makefile C/C++/shell script jobs and I have no experience with Ivy to verify if this is possible.
What is the best way to specify parallel jobs and their dependencies in Jenkins?
There is a Build Flow plugin that meets this very need. It defines a DSL for specifying parallel jobs. Your example might be written like this:
build("job A")
parallel (
{
build("job B")
build("job D")
},
{
build("job C")
}
)
build("job E")
I just found it and it is exactly what I was looking for.
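
For reference, the same series-parallel graph can also be written with the parallel and build steps of a scripted Pipeline; this is a sketch, not part of the original answer, with the job names taken from the diagram in the question.

// Scripted Pipeline sketch of the same fork/join graph; no node block is needed
// because the build step only triggers the downstream jobs and waits for them.
build 'job A'
parallel(
    'B-then-D': {
        build 'job B'
        build 'job D'
    },
    'C': {
        build 'job C'
    }
)
build 'job E'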
There is one solution that might work for you. It requires that all builds start with a single Job and end with a definite series of jobs at the end of each chain; in your diagram, "job A" would be the starting job, jobs C and D would be the terminating jobs.
Have Job A create a fingerprinted file. Job A can then start multiple chains of builds, B/D and C in this example. Also on Job A, add a promotion via the Promotions Plugin, whose criteria is the successful completion of the successive jobs - in this case, C and D. As part of the promotion, include a trigger of the final job, in your case Job E. This can be done with the Parameterized Trigger Plugin. Then, make sure that each of the jobs you list in the promotion criteria also fingerprint the same file and get the same fingerprint; I use the Copy Artifact Plugin to ensure I get the exact same file each time.