I need to build and test on multiple configurations: Linux, OS X, and Solaris. I have slave nodes labeled "linux", "osx", and "solaris". On each configuration, I want to (a) build, (b) run smoke tests, and (c) if the smoke tests pass, run the full tests, and perhaps more.
I thought that multi-configuration jobs might be the answer, so I set up a multi-configuration build job, and it starts concurrent builds on each OS. The build job triggers a downstream smoke-test build, which, in turn, triggers the full-test job.
I've run into the following issues:
If one of the configurations fails, the job as a whole fails, and
Jenkins will not fire any downstream jobs (e.g., if the solaris build
fails, Jenkins will not run smoke tests or full tests for osx and
linux).
The solaris build takes about twice as long as the others (on the
order of an hour), and I'd prefer the linux and osx smoke tests not
wait for the solaris build to finish.
Does that mean I'm left with hand-crafting three pipelines of jobs, and
putting them behind a "start-all" job (i.e., creating and hand-chaining
the following jobs)?
build-linux smoke-test-linux full-test-linux
build-osx smoke-test-osx full-test-osx
build-solaris smoke-test-solaris full-test-solaris
Did I miss something obvious?
As far as I know, the answer is to create three matrix jobs, one for each system. Each would then have three sub-jobs (build, smoke-test, full-test), with the build job as the touchstone.
Have you thought about combining the build, smoke tests, and full tests into a single multi-configuration job? Other than being a little messy, this should work for you.
To answer your first issue: to trigger a downstream job regardless of the result, use a trigger parameterized build set to run when complete (always trigger), and then check "build w/o parameters".
To answer your second issue: either use an all-encompassing multi-configuration (matrix) job, or use three separate job streams as you mentioned. UPDATE: you could run three sequential matrix jobs, one for each step (build, smoke-test, full tests), but that would mean that if any of the build steps failed, none of the smoke tests would run.
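For what it's worth, the three hand-chained streams can also be expressed as one scripted Pipeline; a rough sketch, where build.sh, run-smoke-tests.sh, and run-full-tests.sh are hypothetical stand-ins for your actual build and test steps:

    // One independent build/smoke/full stream per OS label; a failure on
    // solaris does not block the linux or osx streams.
    def branches = [:]
    for (os in ['linux', 'osx', 'solaris']) {
        def label = os                 // capture the loop variable for the closure
        branches[label] = {
            node(label) {              // run on a slave with this label
                stage("build-${label}")      { sh './build.sh' }
                stage("smoke-test-${label}") { sh './run-smoke-tests.sh' } // only if the build succeeded
                stage("full-test-${label}")  { sh './run-full-tests.sh' }  // only if smoke tests passed
            }
        }
    }
    parallel branches

Because each branch fails independently, the linux and osx smoke tests start as soon as their own builds finish, without waiting for solaris.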
Related
We have a project where we have several Jenkins jobs: one type that runs the delivery (A), one that does just compilation and unit tests (B), and one that runs integration tests, static code analysis, et cetera (C).
We run on four Jenkins nodes (master + three slaves), and our jobs are a mix of declarative pipeline jobs and manually configured Jenkins jobs.
We only want to run one integration test (C) build per node at a time. However, we want to run as many delivery (A) and compilation/unit-test (B) builds as there are executors.
Until now, the Throttle Concurrent Builds plugin (https://github.com/jenkinsci/throttle-concurrent-builds-plugin) has satisfied our needs. However, this plugin does not support declarative pipeline builds, nor does it appear to be actively maintained.
The Lockable resources plugin (https://github.com/jenkinsci/lockable-resources-plugin) seems promising, but we haven't found any way to lock the entire build with a resource-name dynamically set. That is, when we start a C build, we want it to lock "resource_{name of server}".
This plugin allows setting an entire-build lock in the options directive, but we haven't figured out how to evaluate an environment variable there.
Any suggestions would be highly appreciated!
So the workaround on our side was to rewrite the pipeline script from declarative to scripted syntax. Then, the Throttle Concurrent Builds plugin works like a charm.
When the bug https://issues.jenkins-ci.org/browse/JENKINS-49173 is fixed, the plugin will work with declarative pipelines as well.
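To illustrate the workaround, a minimal scripted sketch, assuming a throttle category named integration-tests has been configured globally with a per-node limit of one, and a hypothetical test script:

    // The throttle step comes from the Throttle Concurrent Builds plugin and
    // only works in scripted syntax, hence the rewrite mentioned above.
    throttle(['integration-tests']) {
        node {
            stage('Integration tests') {
                sh './run-integration-tests.sh'   // hypothetical test script
            }
        }
    }

Note that in scripted syntax, the Lockable Resources lock step also accepts a dynamically built name, e.g. lock("resource_${env.NODE_NAME}") inside a node block, which may cover the original use case as well.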
Currently I have one big job for a big C++ project, which does everything: compiling, running unit tests, measuring coverage, building release binaries, and creating docs.
As the job takes 40 minutes, I would like to split it into several smaller jobs.
I want to use the following approach:
a main job running every 15 minutes, which checks out the SCM, compiles the Debug configuration, and runs basic unit tests
several jobs for code analysis, coverage, integration tests, Release builds, and deployment to our application server, running once per night, if the main job and each previous job were successful
I need the SVN revision, the build number and the workspace of the main job in all following jobs.
So far I was unable to achieve this.
The Parameterized Trigger plugin doesn't support triggering only once a day, the Build Trigger plugin doesn't support parameters, and the built-in trigger didn't work either.
I understand that pipelines would probably make my approach easier, but, for example, the CMake plugin I use won't support pipelines for a while.
Any other ideas or solutions?
You can configure your downstream jobs as parameterized jobs (https://wiki.jenkins-ci.org/display/JENKINS/Parameterized+Build) and trigger them as post-build actions with the Parameterized Trigger plugin:
https://wiki.jenkins-ci.org/display/JENKINS/Parameterized+Trigger+Plugin.
As parameters you can pass any variable you need, such as the build number and the workspace.
Or just have a look at Jenkins Pipeline.
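If you do go the Pipeline route, the hand-off looks roughly like this; a sketch, assuming a Pipeline-from-SCM job (so that 'scm' is defined) and a downstream job named nightly-analysis that defines SVN_REVISION, UPSTREAM_BUILD_NR, and UPSTREAM_WORKSPACE as string parameters (all names here are hypothetical):

    node {
        // checkout returns a map of SCM-provided variables (including
        // SVN_REVISION for Subversion) in recent versions of the SCM step.
        def scmVars = checkout scm
        build job: 'nightly-analysis', wait: false, parameters: [
            string(name: 'SVN_REVISION',       value: scmVars.SVN_REVISION),
            string(name: 'UPSTREAM_BUILD_NR',  value: env.BUILD_NUMBER),
            string(name: 'UPSTREAM_WORKSPACE', value: env.WORKSPACE)
        ]
    }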
In my project I have more than 20 test cases. I previously used Bamboo test streams to run these test cases in parallel. Now that we're moving to Jenkins, how can I divide these test cases into several streams in order to minimize the total run time?
I think that the Build Flow plugin and Build Flow Test Aggregator plugin can do what you want.
The Build Flow plugin supports running jobs in parallel. It could schedule your "child" job to run in parallel with different parameters.
The Build Flow Test Aggregator grabs test results from the scheduled builds of a Build Flow job, so your "child" job will need to publish its own test results.
You will need to configure your "child" job so that it can run in parallel, by checking "Execute concurrent builds if necessary" in the job configuration.
Whatever set of slaves provides the connection to the embedded devices will need enough executors to run your jobs in parallel.
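As a rough illustration, the flow DSL for three parallel streams could look like the following, where device-tests and its SUITE parameter are hypothetical names for your parameterized "child" job:

    // Goes in the Build Flow job's DSL text box; each build() schedules one
    // run of the child job with a different parameter value, all in parallel.
    parallel (
        { build("device-tests", SUITE: "stream-1") },
        { build("device-tests", SUITE: "stream-2") },
        { build("device-tests", SUITE: "stream-3") }
    )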
We have set up a Jenkins instance as a remote testing resource for our developers. Every time a tag matching our refspec is created, a job is triggered and the results are emailed to the developer.
A job is defined as follows:
1 phase consisting of three jobs (frontend tests, integration tests,
unit tests)
All subjobs are executed, irrespective of success
Email the developer the test results
This setup mostly works except for two issues:
I cannot get the job to run in parallel. The subjobs run in
parallel, but only one instance of the job runs at a time. Is this
something I can configure differently somewhere, or is this inherent
in the way the plugin works?
The main job checks out and occupies one of our build servers for
the duration of the job. Is there a way to do git polling and then
just grab the hashref and release the build server on which the
polling was done before continuing building the subjobs?
In the multi job plugin, everything runs in parallel that is listed in the same "Phase", however the multijob itself needs somewhere to run. If you have a build followed by a test phase, you can add a "Build Phase" prior to the test phase, and only that phase will require a "build server".
There is an option called "Execute concurrent builds if necessary" that will allow multiple jobs of the same name to run simultaneously. This option must be set for the parent job and the subjobs as the default behavior of Jenkins is to only allow one build of a Project (job) to run at a time. Beware: Read the comments as this may have unintended side effects.
It's not clear what you mean about polling; however, if you're using git, you may want to use webhooks so that pushes to the repository invoke Jenkins directly. No need to poll.
I have a couple of unit testing / BDD jobs on our Jenkins instance that trigger a bunch of processes as they run. I have multiple Windows slaves, any one of which can run my tests.
After the test execution is complete, irrespective of whether the build status is passed/failed/unstable, I want to run "taskkill" and kill a couple of processes.
I had been doing that earlier by triggering a "Test_Janitor" downstream job - but this approach doesn't work anymore since I added more than one slave.
How can I either run the downstream job on the same slave as the upstream job, or have some sort of post-build step to run "taskkill"?
You can install the Post Build Task plugin to call a batch script on the slave (when your UT/BDD runs are completed).
The other solution is to call a downstream job and pass the %NODE_NAME% variable to it with the Parameterized Trigger plugin.
Next, you can use psexec to kill the processes on the relevant node.
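For completeness, if these jobs ever move to Pipeline, the cleanup can live in the same build with no downstream job at all; a minimal scripted sketch, with hypothetical script and process names:

    node('windows') {
        try {
            stage('Run tests') {
                bat 'run_bdd_tests.cmd'   // hypothetical test runner
            }
        } finally {
            // Runs on the same node whatever the result; returnStatus ignores
            // the exit code so a missing process doesn't fail the build.
            bat script: 'taskkill /F /IM some_helper.exe /T', returnStatus: true
        }
    }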