Jenkins multiple dependencies do not work properly (Trigger, PrerequisitiesCheck) - jenkins

Background of the problem
We are using Jenkins to build lots of projects, some of which depend on each other.
As most of you know, Jenkins allows you to trigger another job if the build is stable (stability is an option that we want). There is also an option to "block build if certain jobs are running", and another option called "PrerequisitiesCheck".
Say project A triggers project B, and B triggers project C; for simplicity, call this configuration A->B->C. Say there is another path, A->X->C. The first problem: if A and B build successfully, C is triggered even while X is still being built. The solution is the "block build if certain jobs are running" option. The second problem: when A triggers B and X, and B fails, X nevertheless triggers C, and C fails because B has already failed. That is something we do not want. A solution (not an exact one) is the "PrerequisitiesCheck" option; at least with it, the person responsible for project C can see that the failure was not caused by project C itself. We also have to use the trigger option to link the A->B->C and A->X->C projects to each other.
Problem
The problem is simple: we do not want to use all three options (Trigger, PrerequisitiesCheck, Block build if certain jobs are running), because it is too much work and this complex structure will most probably cause many problems (forgetting a link is the simplest one). Is there any tool that does all three at the same time? Do you know any plugin that would let us solve this problem with only one link?

Multijob plugin will be of interest to you.
This is what the documentation says.
After installing this plugin you will be able to do the following:
When creating a new Jenkins job, you have an option to create a MultiJob project.
This job can define, in its Build section, phases that contain one or more jobs.
All jobs belonging to one phase will be executed in parallel (if there are enough executors on the node).
All jobs in phase 2 will be executed only after jobs in phase 1 are completed, etc.
Since A triggers both B and X, I would make them run in parallel (part of the same phase) and trigger C only when both are done.
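If you manage jobs as code, the Job DSL plugin can describe the same phase layout (a sketch only; the MultiJob project name is hypothetical, and this assumes you have the Job DSL plugin alongside Multijob):
multiJob('A-B-X-C') {
    steps {
        phase('build A') {
            phaseJob('A')
        }
        phase('fan out') {
            // B and X share a phase, so they run in parallel
            phaseJob('B')
            phaseJob('X')
        }
        phase('join') {
            // starts only after every job in the previous phase completes
            phaseJob('C')
        }
    }
}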

Maybe the Build Flow Plugin is what you are looking for:
Build Flow Plugin
There you can write a little script which triggers your existing jobs, for example:
parallel (
    // job 1, 2 and 3 will be scheduled in parallel
    { build("job1") },
    { build("job2") },
    { build("job3") }
)
// job4 will be triggered after jobs 1, 2 and 3 complete
build("job4")

Related

How can I prioritize later steps when running Jenkins pipeline scripts?

Suppose I have two similar simple pipeline scripts, each of which basically look like this:
stage('only stage') {
    node {
        // first step
    }
    node {
        // second step
    }
}
Let's call the steps in my first script A and B, and the steps in my similar second script C and D.
The way I understand it from my testing, as I kick off builds for these scripts, Jenkins appears to queue only the first step from each script, and then queue the next step only when the first one finishes. As a result, it appears to favour earlier running steps in pipelines in general. Consider this example which maps actions I've taken => the resulting Jenkins queue of steps:
Kick off script 1 => [A]
Kick off script 2 => [A, C]
A starts => [C]
A finishes => [C, B]
C starts => [B]
...etc
My question is: at the last step I've depicted where C starts, is there a way to prioritize B to start instead, since it's a later step in a build (vs. C which is a first step)? My motivation is that I would rather have one build completely finished sooner, vs. several builds simultaneously in progress / partially finished.
It seems like this was possible with freestyle projects through the Parameterized Trigger plugin (details in this question), but I haven't been able to figure out if there's a similar way to make this work with pipeline scripts. I've seen the Priority Sorter plugin which apparently has recently added pipeline compatibility, but I didn't understand how that pipeline compatibility actually worked as I was unable to find any examples or documentation on its usage, and I'm not sure hardcoding priority numbers into my steps would be an ideal solution anyway (in reality, the scripts are much more complex than the examples I provided, so I could see priority numbers quickly getting unwieldy and confusing). There are a few other "priority" based plugins I found, notably the Accelerated Build Now plugin, but it hasn't been updated in years so it doesn't have pipeline support.
So far the attitude I've seen is that in situations where queues are forming and prioritization of tasks is needed, "just add more slaves". This makes sense as a design decision, but unfortunately I'm working with limited resources and do need some queue management as I can't just add more slaves to relieve the queue. Has anyone else solved this?

Include a different job in a job's build steps in Jenkins

I am trying to set up a rather unique build flow, and I haven't found a plugin or a way to do it with Jenkins yet.
There is one job called "JOB A" which is used by itself and creates a standalone installer.
Then there is "JOB B" which creates another installer but it needs to include everything built in "JOB A" in addition to some other stuff. Now I could just copy JOB A build steps into JOB B, but I want to actually build JOB A and maybe even use those artifacts later as well.
It cannot be a build trigger cause JOB B needs to continue building after JOB A has finished and I cannot use something like flow because that creates JOB C and only sequences other jobs and I would need to go into A and B to get the artifacts.
Bonus points would be if it checked JOB A source code in git for any changes since its last build when building JOB B and decide if it needs to build it again.
I looked at many plugins and I can't seem to find one that would do this.
I hope my explanation was not confusing. Sorry if it was, I could elaborate.
If I understand correctly what you want, then what you need is:
Custom (shared) workspace
Parameterized Trigger Plugin
For both JOB A and JOB B, set up the Custom Workspace to point to the same folder on the server. (You can even leave JOB A's workspace as is and just point JOB B's custom workspace at JOB A's workspace.) I am not at my work computer with Jenkins and can't provide screenshots, so I will borrow this great guide for more info on how to set up a custom workspace.
Then, whenever appropriate, have JOB A execute the build step Trigger/call builds on other projects, naming JOB B. You can even pass it all the same parameters that JOB A had. By default, this will not wait for JOB B to complete: it will kick off JOB B, JOB A will finish running in the meantime, and JOB B completes whenever it is done.
If needed, you can check Block until triggered projects finish their builds, and then JOB A will wait for JOB B to finish before continuing.
So, the above will:
Share workspace, and not do extra checkouts if code didn't change
Let JOB A and JOB B exist independently, each with its own artifacts, and each able to be triggered separately.
JOB B will get everything from JOB A through shared workspace and passed parameters.
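If you manage your jobs as code, the same setup can be sketched with the Job DSL plugin (a sketch under assumptions: the job names, workspace path and shell commands are all hypothetical):
job('JOB-A') {
    steps {
        shell('./build-installer-a.sh')   // hypothetical standalone-installer build
        // Parameterized Trigger build step: kick off JOB B and wait for it
        downstreamParameterized {
            trigger('JOB-B') {
                block {
                    buildStepFailure('FAILURE')   // fail this step if JOB B fails
                }
                parameters {
                    currentBuild()   // pass JOB A's parameters along
                }
            }
        }
    }
}
job('JOB-B') {
    // share JOB A's workspace so JOB B sees everything JOB A built
    customWorkspace('/var/lib/jenkins/workspace/JOB-A')
    steps {
        shell('./build-installer-b.sh')
    }
}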

Start build after finishing some others

We have some builds that depend on each other in a kind of tree structure:
A
AA
AB
ABA
AC
B
BA
BB
BBA
BBAA
BBAAA
BBAB
C
...
Another build should be triggered once all of these builds have finished. Unfortunately, it is not possible to say which build will always finish last, so that cannot be used to trigger the following task.
Is there a way (maybe a plugin) to trigger a new build when every build in a list of builds has finished?
Thanks in advance!
Frank
Take a look at Join Plugin:
https://wiki.jenkins-ci.org/display/JENKINS/Join+Plugin
This plugin allows a job to be run after all the immediate downstream jobs have completed. In this way, the execution can branch out and perform many steps in parallel, and then run a final aggregation step just once after all the parallel work is finished.
Although this is an old question, you might consider restructuring your build pipeline completely using the Build Flow Plugin
https://wiki.jenkins-ci.org/display/JENKINS/Build+Flow+Plugin
It will have the advantage of keeping your pipeline logic in one place.
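With Build Flow, the tree from the question maps onto nested parallel blocks (a sketch, assuming each build is a job named after its tree node; FinalBuild is a hypothetical name for the aggregating build):
parallel (
    {
        build("A")
        parallel (
            { build("AA") },
            { build("AB"); build("ABA") },
            { build("AC") }
        )
    },
    {
        build("B")
        parallel (
            { build("BA") },
            {
                build("BB")
                parallel (
                    { build("BBA"); build("BBAA"); build("BBAAA") },
                    { build("BBAB") }
                )
            }
        )
    },
    { build("C") }
)
// triggered only when every branch above has finished
build("FinalBuild")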

Jenkins - Running instances of single build concurrently

I'd like to be able to run several builds of the same Jenkins job simultaneously.
Example:
Build [*jenkins_job_1*]: calls an ant script with parameter 'A'
Build [*jenkins_job_1*]: calls an ant script with parameter 'B'
Repeat as necessary.
Each instance of the job should run simultaneously, rather than going through a queue.
The reason I'd like to do this is to avoid having to create several jobs that are nearly identical, all of which would need to be maintained.
Is there a way to do this, or maybe another solution (i.e., dynamically create a job from a base job and remove it after it's finished)?
Jenkins has a check box: "Execute concurrent builds if necessary"
If you check this, then it'll start multiple builds for a job.
This works with the "This build is parameterized" checkbox.
You would still trigger the builds, passing your A or B as parameters. You can use another job to trigger them or you could do it manually via a script.
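For example, such an upstream trigger written as a small scripted pipeline might look like this (a sketch; the parameter name ANT_PARAM is hypothetical):
// fire two concurrent runs of the same parameterized job
parallel(
    runA: { build job: 'jenkins_job_1', parameters: [string(name: 'ANT_PARAM', value: 'A')] },
    runB: { build job: 'jenkins_job_1', parameters: [string(name: 'ANT_PARAM', value: 'B')] }
)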
You can select Build a Multi-configuration project (Matrix build) when you create the job. Then, under the job's configuration, you can define the Configuration Matrix which lets you specify one or more parameters (axes) for different builds. Regarding running simultaneously, you should be able to run as many simultaneous builds as you have executors (with the appropriate label).
Unfortunately, the Jenkins wiki lacks documentation about this setup. There are a couple previous SO questions, here and here, that might provide a little guidance. There was a "recent" blog post about setting up a multi-configuration job to perform builds on various platforms.
A newer (and better) solution is the Jenkins Job DSL Plugin.
We've been using it with great success. Our job configurations are now disposable... we can set up a huge stack of complicated jobs from some groovy files and a couple template jobs. It's great.
I'm liking it a lot more than the matrix builds, which were complicated and harder to understand.
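To give a flavour of it, a Job DSL seed script can stamp out the near-identical jobs from a list (a sketch; the job names, ant target and property are hypothetical):
// seed script: one job per variant, each allowed to run concurrently
['A', 'B'].each { variant ->
    job("jenkins_job_1-${variant}") {
        concurrentBuild()
        steps {
            ant {
                target('build')
                prop('variant', variant)   // handed to the ant script as a property
            }
        }
    }
}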
There is nothing stopping you from doing this using the Jenkins pipeline DSL.
We have the same pipeline running in parallel in order to model combined loads for an application that exposes web services, provides a database to several external applications, receives data via several work queues and has a GUI front end. The business gives us non-functional requirements (NFRs) which our application must meet that guarantees its responsiveness even at busy times.
The different instances of the pipeline are run with different parameters. The first instance might be WS_Load, the second GUI_Load and the third Daily_Update_Load, modelling a large data queue that needs processing within a certain time-frame. More can be added depending on which combination of loads we're wanting to test.
Other answers have talked about the checkboxes for concurrent builds, but I wanted to mention another issue: resource contention.
If your pipeline uses temporary files or stashes files between pipeline stages, the instances can end up pulling the rug out from under each other's feet. For example, one concurrent instance can overwrite a file while another instance expects to find the pre-overwritten version of the same stash. We use the following code to ensure stashes and temporary filenames are unique per concurrent instance:
def concurrentStash(stashName, String includes) {
    /* make a stash unique to this pipeline and build
       that can be unstashed using concurrentUnstash() */
    echo "Safe stashing $includes in ${concurrentSafeName(stashName)}..."
    stash name: concurrentSafeName(stashName), includes: includes
}

def concurrentSafeName(name) {
    /* make a name or name component unique to this pipeline and build
     * guards against contention caused by two or more builds from the same
     * Jenkinsfile trying to:
     * - read/write/delete the same file
     * - stash/unstash under the same name
     */
    "${name}-${BUILD_NUMBER}-${JOB_NAME}"
}

def concurrentUnstash(stashName) {
    echo "Safe unstashing ${concurrentSafeName(stashName)}..."
    unstash name: concurrentSafeName(stashName)
}
We can then use concurrentStash stashName and concurrentUnstash stashName and the concurrent instances will have no conflict.
If, say, the two pipelines both need to store stats, we can do something like this for filenames:
def statsDir = concurrentSafeName('stats')
and then the instances will each use a unique filename to store their output.
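Putting it together, usage looks like this (the stash name and include pattern are just examples):
concurrentStash('test-results', 'reports/**/*.xml')
// ... later, possibly on a different node ...
concurrentUnstash('test-results')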
You can create a build and configure it with parameters. Click the This build is parameterized checkbox and add your desired param(s) in the Configuration of the build. You can then fire off simultaneous builds using different parameters.
Side note: The "Bulk Builder" in Jenkins might push it into a queue, but there's also a This bulk build is parameterized checkbox.
I had a pretty large build queue, and I performed the steps below to run jobs in parallel in Jenkins and reduce the number of jobs waiting in the queue:
For each job, navigate to Configure and select the checkbox "Execute concurrent builds if necessary".
Navigate to Manage Jenkins -> Configure System, look for "# of executors", and set the number of parallel executors you want (in my case it was set to 0 and I updated it to 2).

Hudson dependencies

I have set up my Hudson job A. Job A depends on jobs B and C. I have set them up with "Build other projects". This works well, although each job is in a separate directory in my workspace (the default structure). But I need jobs B and C in job A's workspace (root folder).
I have considered two approaches:
Change the workspace for job A and pass that variable to the job via "Trigger parameterized build on other projects", then use an Ant build script to copy them to that location, since I couldn't find an option to change the folder where job B or C should go.
Trigger job B and then C from a build script as part of job A. This is done via remote calls (found it somewhere on Stack Overflow), but that option is missing in my configuration and I couldn't find any plugin that would add it.
The ideal approach for me would be to use an Ant build script and trigger jobs B and C from there with antsvn or something like that. But I can't find a solid example of this.
The reason I want it this way is simple: job B is a CMS which is essential for job A, and job C has Python scripts that need to be executed before a new version can land on the production server (this is already done with py-ant).
Or maybe there is some better way to manage dependencies like this. Any help is appreciated.
I hope it makes sense.
Think of Jobs "B" and "C" as producing "artifacts" that Job "A" needs. Then, all you have to do is import the artifacts produced by Jobs "B" and "C" whenever you build Job "A".
Your jobs shouldn't share workspaces. Otherwise what happens if Job "A" is building when Job "B" or "C" is triggered? You'll have multiple builds going on at once. However, if you separate out what "A" needs from jobs "B" and "C", you can have Job "A" import those dependencies. There are two ways of doing this:
The hard but correct way: You should create a release repository from which jobs can fetch the artifacts they need. If this sounds Mavenish to you, well, it is. However, I've used Maven's architectural pieces without Maven projects and it works fine. You can use something like Artifactory or Nexus as your release repository. Then use wget or curl to fetch the items from the repository, and use Maven's deploy:deploy-file plugin to send the stuff over. You will need Maven (which is a Java process) to run deploy:deploy-file, but you don't need a Maven project, or even a Java project. The deploy:deploy-file plugin doesn't even require a Maven pom.xml file. Think of it more as a command line utility to send stuff to your release repository.
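For reference, publishing a file that way is a single command (a sketch; the repository URL and coordinates are placeholders):
# upload one artifact to the release repository, no pom.xml needed
mvn deploy:deploy-file \
    -Durl=https://nexus.example.com/repository/releases \
    -DrepositoryId=releases \
    -Dfile=job-b-output.zip \
    -DgroupId=com.example \
    -DartifactId=job-b-output \
    -Dversion=1.0.0 \
    -Dpackaging=zip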
The easy, but incorrect way: Hudson has a Copy Artifacts plugin that you can use to do this. The problem is that it's easy to set up, but hard to keep track of. Plus, it makes you dependent upon a very specific tool. If you decide to move away from Hudson, you might not be able to duplicate this functionality.
