What is the concept in Jenkins that acts like Bamboo test streams?

In my project I have more than 20 test cases. I previously used Bamboo test streams to run these test cases in parallel. Now that I am moving to Jenkins, how can I divide these test cases into several streams to minimize the overall run time?

I think that the Build Flow plugin and Build Flow Test Aggregator plugin can do what you want.
The Build Flow plugin supports running jobs in parallel. It can schedule multiple runs of your "child" job in parallel, each with different parameters.
The Build Flow Test Aggregator grabs test results from the scheduled builds of a Build Flow job, so your "child" job will need to publish its own test results.
You will need to configure your "child" job so that it can run in parallel by checking the "Execute concurrent builds if necessary" in the job configuration.
Whatever set of slaves provides the connection to the embedded devices will need enough executors to run your jobs in parallel.
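For illustration, a minimal Build Flow DSL sketch along these lines could fan the child job out into several parallel streams (the job name "device-tests" and the STREAM parameter are placeholders, not names from the question):

    // Schedule the same child job three times in parallel, once per test stream.
    // The Build Flow Test Aggregator then collects the results each run publishes.
    parallel (
        { build("device-tests", STREAM: "1") },
        { build("device-tests", STREAM: "2") },
        { build("device-tests", STREAM: "3") }
    )

The child job itself would decide, based on the STREAM parameter, which subset of the 20+ test cases to execute, and publish its own test results as described above.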

Related

Stress testing a Jenkins master using JMeter

I am trying to stress test my Jenkins infrastructure using JMeter. I have created a JMeter test plan that uses the HTTPRequest sampler to trigger Jenkins builds through the Jenkins REST API. The idea is to trigger a large number of builds and monitor system health. When I run the test plan with a single thread it works fine, but with multiple threads each HTTPRequest should trigger a build per thread; instead each build is triggered only once on Jenkins, no matter what the thread count is. The JMeter results show the HTTPRequest as successful for all threads, yet on Jenkins the build appears to be triggered for only one thread.
A well-behaved JMeter test must represent real system usage. If you want to simulate a user clicking Jenkins' "Build Now" button, you need to send a request like:
http://jenkins_host:port/job/jobname/build?delay=0sec
The delay=0sec parameter is important: without it, only the first request will trigger the job; with it, you will get as many concurrent builds as there are available executors.
If there are not enough executors to serve all the jobs, the remaining jobs will be placed in the build queue.
You can use the JMeter PerfMon Plugin to monitor Jenkins node health (CPU, RAM, JVM metrics, etc.).
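For reference, roughly what each JMeter thread does can be sketched in a few lines of Groovy (host, port and job name are placeholders, and it assumes a test instance where the caller may trigger builds without a CSRF crumb or API token):

    // Fire several concurrent build triggers, mimicking JMeter threads.
    def jobUrl = 'http://jenkins_host:8080/job/jobname/build?delay=0sec'
    def threads = (1..5).collect { i ->
        Thread.start {
            def conn = new URL(jobUrl).openConnection()
            conn.requestMethod = 'POST'   // Jenkins schedules builds on POST
            // Per the note above, delay=0sec keeps each request from being
            // folded into the first queued build.
            println "thread ${i} -> HTTP ${conn.responseCode}"
        }
    }
    threads*.join()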

Jenkins, Multijob, how to run in parallel?

We have set up a Jenkins instance as a remote testing resource for our developers. Every time a tag is created matching our refspec a job is triggered and the results emailed to the developer.
A job is defined as follows:
1 phase consisting of three jobs (frontend tests, integration tests, unit tests)
All subjobs are executed, irrespective of success
Email the developer the test results
This setup mostly works except for two issues:
I cannot get the job to run in parallel. The subjobs run in parallel, but only one instance of the job runs at a time. Is this something I can configure differently somewhere, or is this inherent in the way the plugin works?
The main job checks out and occupies one of our build servers for the duration of the job. Is there a way to do git polling and then just grab the hashref and release the build server on which the polling was done before continuing building the subjobs?
In the multi job plugin, everything runs in parallel that is listed in the same "Phase", however the multijob itself needs somewhere to run. If you have a build followed by a test phase, you can add a "Build Phase" prior to the test phase, and only that phase will require a "build server".
There is an option called "Execute concurrent builds if necessary" that allows multiple builds of the same job to run simultaneously. This option must be set on both the parent job and the subjobs, as the default behavior of Jenkins is to only allow one build of a project (job) to run at a time. Beware that this may have unintended side effects.
It is not clear what you mean about polling; however, if you are using git, you may want to use webhooks so that pushes to the repository invoke Jenkins directly. There is then no need to poll.
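If you are using the Jenkins Git plugin, for example, a post-receive hook on the git server can hit the plugin's notifyCommit endpoint (host and repository URL below are placeholders), which tells Jenkins to poll only the jobs watching that repository:

    http://jenkins_host:8080/git/notifyCommit?url=ssh://git@git-server/your-repo.git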

Jenkins - cleanup after job

I have a couple of unit testing / BDD jobs on our Jenkins instance that trigger a bunch of processes as they run. I have multiple Windows slaves, any one of which can run my tests.
After the test execution is complete, irrespective of whether the build status is passed/failed/unstable, I want to run "taskkill" and kill a couple of processes.
I had been doing that earlier by triggering a "Test_Janitor" downstream job - but this approach doesn't work anymore since I added more than one slave.
How can I either run the downstream job on the same slave as the upstream job, or have some sort of post-build step to run "taskkill"?
You can install the Post Build Task plugin to call a batch script on the slave (when your UT/BDD are completed).
The other solution is to call a downstream job and to pass the %NODE_NAME% variable to this job with the Parameterized Trigger plugin.
Next, you can use psexec to kill the processes on the relevant node.
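As a sketch, the Post Build Task script could be a short batch snippet along these lines (the process names are only examples of what your tests might leave behind):

    rem Kill leftover test processes on this slave; /F forces termination,
    rem /T also kills child processes. Process names below are examples only.
    taskkill /F /T /IM chromedriver.exe
    taskkill /F /T /IM iexplore.exe
    rem Do not fail the step if the processes were not running.
    exit /b 0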

How can I ensure that only one of a kind of Jenkins job is run?

I have several integration tests within my Jenkins jobs. They run on several application servers, and I want to make sure that only one integration test job is run at the same time on one application server.
I would need something like a tag or variable within my jobs that defines a group of jobs, plus a way to configure that within that group only one job may run at a time.
Could I use the Exclusion plugin for that? Does anyone have experience with it?
Use the Throttle Concurrent Builds Plugin. It replaces the Locks and Latches plugin, and provides the capability to restrict the number of jobs running for specific labels.
For example: if you create a category 'Integration Test Server A' with a maximum concurrent count of 1 and tie jobs to it, and a second category 'Integration Test Server B' tied to other jobs, each category will only run a single concurrent build (assuming you've set a max job count of 1), and the other jobs in that category will queue until the 'lock' has cleared.
Using this method, you don't have to restrict the number of executors available on any specific Jenkins instance, and can easily add further slaves in the future without having to reconfigure all your jobs.
If I understand you right, you have a pool of application servers and it doesn't matter on what server your tests run. They only need to be the only test on that server.
I haven't seen a plugin that can do that. However, you can easily work around it: configure a slave for each application server (1 slave = 1 app server), assign the same label to all of these slaves, and give every slave only one executor. Then tie the jobs that run the integration tests to that label. Jenkins will then assign those jobs to the next available slave (or node) that has that label.
Bear in mind that you can run more than one slave on the same piece of hardware, and even a master and a slave can coexist on the same server.
Did you check the parameter below under Jenkins -> Manage Jenkins -> Configure System?
# of executors
This parameter helps you restrict the number of jobs executed at a time.
A Jenkins executor is one of the basic building blocks which allow a build to run on a node/agent (e.g. build server). Think of an executor as a single “process ID”, or as the basic unit of resource that Jenkins executes on your machine to run a build. Please see Jenkins Terminology for more details regarding executors, nodes/agents, as well as other foundational pieces of Jenkins.
You can find information on how to set the number of Jenkins executors for a given agent on the Remoting Best Practices page, section Number of executors.
Source - https://support.cloudbees.com/hc/en-us/articles/216456477-What-is-a-Jenkins-Executor-and-how-can-I-best-utilize-my-executors

How to configure a Jenkins multi-configuration build and test

I need to build and test on multiple configurations: linux, osx and solaris. I have slave nodes labeled "linux", "osx" and "solaris". On each configuration, I want to (a) build, (b) run smoke tests, and (c) if the smoke tests pass, run full tests, and perhaps more.
I thought that multi-configuration jobs might be the answer, so I set up a multi-configuration build job and it starts concurrent builds on each OS. The build job will trigger a downstream smoke-test build, which, in turn, triggers the full-test job.
I've run into the following issues:
If one of the configurations fails, the job as a whole fails, and Jenkins will not fire any downstream jobs (e.g., if the solaris build fails, Jenkins will not run smoke tests or full tests for osx and linux).
The solaris build takes about twice as long as the others (on the order of an hour), and I'd prefer the linux and osx smoke tests not wait for the solaris build to finish.
Does that mean I'm left with hand-crafting three pipelines of jobs, and putting them behind a "start-all" job (i.e., creating and hand-chaining the following jobs)?
build-linux smoke-test-linux full-test-linux
build-osx smoke-test-osx full-test-osx
build-solaris smoke-test-solaris full-test-solaris
Did I miss something obvious?
As far as I know, the answer is to create 3 matrix jobs, one for each system. Each would then have 3 subjobs (build, smoke-test, full-test) with the build job as a touchstone.
Have you thought about combining the build, smoke-test and full tests into a single multi-configuration job? Other than being a little messy, this should work for you.
To answer your first issue: to trigger a downstream job regardless of the result, use the parameterized trigger set to run when the build completes (always trigger), and check "build w/o parameters".
To answer your second issue: either use an all-encompassing multi-configuration (matrix) job, or use three separate job streams as you mentioned. UPDATE: you could run 3 sequential matrix jobs, one for each step (build, smoke-test, full tests), but that would mean that if any of the build steps failed, none of the smoke tests would run.
