I need to create a parallel pipeline that has the following steps:
Execute the Integration Tests;
Generate the Integration Tests HTML Report;
Publish the Integration Tests HTML Report on Jenkins;
Generate the Integration Tests HTML Coverage Report;
Publish the Integration Tests HTML Coverage Report on Jenkins;
The same steps performed for the integration tests should also be done for the mutation tests;
Deploy the application (jar file) to a pre-configured staging server (Tomcat Server instance);
Perform an automatic smoke test, which consists of a curl request that checks whether the base URL of the application is responsive after deployment;
A UI Acceptance Manual Test will be performed in the following way: a user is notified of the successful execution of all the previous tests and asked to perform a manual test; the pipeline should wait for the user's manual confirmation on Jenkins, which either proceeds with or cancels the progression;
A tag shall be pushed to my SCM (Source Control Management) repository with the Jenkins build number and status.
For now I only have an initial design of what I want my pipeline to look like.
I decided to generate and publish the Javadoc in parallel with the mutation and integration tests, since those tests don't depend on the Javadoc.
I think I can parallelize my pipeline further. What do you think, and what's your opinion on my design?
I think your pipeline is already well-optimized. IMHO, trying to parallelize it further will not yield better performance; it will only add more complexity to the pipeline.
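For illustration, here is a minimal declarative-pipeline sketch of the structure you describe; the build tool commands, report paths, staging URL, and tag naming below are placeholders, not a definitive implementation:

    pipeline {
        agent any
        stages {
            stage('Tests and Javadoc') {
                parallel {
                    stage('Integration tests') {
                        steps {
                            sh './gradlew integrationTest jacocoIntegrationTestReport'   // hypothetical build tasks
                            publishHTML(target: [reportDir: 'build/reports/integrationTest', reportFiles: 'index.html', reportName: 'Integration Test Report'])
                            publishHTML(target: [reportDir: 'build/reports/jacoco/integrationTest/html', reportFiles: 'index.html', reportName: 'Integration Test Coverage'])
                        }
                    }
                    stage('Mutation tests') {
                        steps {
                            sh './gradlew pitest'                                        // hypothetical build task
                            publishHTML(target: [reportDir: 'build/reports/pitest', reportFiles: 'index.html', reportName: 'Mutation Test Report'])
                        }
                    }
                    stage('Javadoc') {
                        steps {
                            sh './gradlew javadoc'
                            publishHTML(target: [reportDir: 'build/docs/javadoc', reportFiles: 'index.html', reportName: 'Javadoc'])
                        }
                    }
                }
            }
            stage('Deploy to staging') {
                steps {
                    sh './deploy-to-tomcat.sh staging'                                   // placeholder deployment script
                }
            }
            stage('Smoke test') {
                steps {
                    // curl --fail makes the stage fail if the base URL is not responsive
                    sh 'curl --fail --silent http://staging.example.com:8080/myapp/ > /dev/null'
                }
            }
            stage('UI acceptance (manual)') {
                steps {
                    input message: 'All automated tests passed. Perform the manual UI acceptance test and confirm to continue.'
                }
            }
            stage('Tag build') {
                steps {
                    // tag name combines the build number with a hardcoded status suffix (assumption)
                    sh "git tag -a jenkins-${env.BUILD_NUMBER}-success -m 'Jenkins build ${env.BUILD_NUMBER}'"
                    sh 'git push origin --tags'
                }
            }
        }
    }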
Related
I've been asked to create a CI pipeline for a project at my work. I'm creating a load test with JMeter and Taurus, so I plan to integrate it with Jenkins to build the whole pipeline. I'm just starting in this field, and a question that came to my mind is:
What happens to all the data created by the load test? Does it carry over to the deploy phase, or does it get deleted once the test is done? Should I clean up after the tests end?
The data is kept in the Jenkins workspace and, by default, it stays on the file system forever.
If you decide to publish the artifacts, they will be available on the Jenkins build dashboard via the web interface.
You might also be interested in the Jenkins Performance Plugin, which allows plotting performance trend charts and conditionally marking builds as unstable or failed depending on pass/fail thresholds.
Example configuration can be found in the How to Run a Taurus Test through Jenkins Pipelines article.
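With the Performance Plugin installed, a pipeline step along these lines publishes the trend chart and applies thresholds (a rough sketch; the results path and threshold values are assumptions):

    // Assumes the Taurus/JMeter run left its result files under results/ in the workspace.
    perfReport sourceDataFiles: 'results/*.xml',
               errorUnstableThreshold: 5,    // mark the build unstable above 5% errors
               errorFailedThreshold: 10      // mark the build failed above 10% errors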
I am not completely familiar with your setup, but as far as I can tell from some quick research, JMeter does the same as every other testing framework and generates HTML reports. Jenkins won't delete them unless you explicitly delete them (rm file.html) or call cleanWs (clean workspace). If the job is deleted, so are the files.
So the test result file should still be present in the deploy phase. You can use a plugin to collect the result. Or just archive it. Or do whatever fits your workflow.
There is generally no need to clean it up (you usually configure Jenkins to delete old builds, which takes care of that).
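If you do want to keep the results with the build and start each run from a clean slate, a post section along these lines is one way to do it (a sketch; the results path is an assumption):

    post {
        always {
            // keep the generated reports with the build record
            archiveArtifacts artifacts: 'results/**', allowEmptyArchive: true
            // then wipe the workspace so the next run starts clean
            cleanWs()
        }
    }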
Currently I have one big job for a big C++ project, which does everything: compiling, running unit tests, coverage, building release binaries, and creating docs.
As the job takes 40 minutes, I would like to split it into several smaller ones.
I want to use the following approach:
A main job every 15 minutes, which checks out the SCM, compiles the Debug configuration, and runs basic unit tests
Several jobs for code analysis, coverage, integration tests, compiling Release builds, and deployment to our application server, running once per night if the main job and each previous job were successful
I need the SVN revision, the build number and the workspace of the main job in all following jobs.
So far I was unable to achieve this.
The Parameterized Trigger plugin doesn't support triggering only once a day, the Build Trigger plugin doesn't support parameters, and the built-in trigger didn't work either.
I understand that pipelines would probably make my approach easier, but the CMake plugin I use, for example, won't support pipelines for a while.
Any other ideas or solutions?
You can configure all your downstream jobs as jobs with parameters (https://wiki.jenkins-ci.org/display/JENKINS/Parameterized+Build) and trigger them as post-build jobs with this plugin:
https://wiki.jenkins-ci.org/display/JENKINS/Parameterized+Trigger+Plugin.
As parameters you can pass any variable you need, such as buildNr and workspace.
Or just have a look at Jenkins Pipeline.
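If you do go the Pipeline route, passing the revision, build number, and workspace to a downstream job is a single step; the job and parameter names below are made up, and SVN_REVISION assumes the Subversion plugin populated it during checkout:

    // Trigger a downstream parameterized job without waiting for it to finish.
    build job: 'nightly-analysis',
          wait: false,
          parameters: [
              string(name: 'SVN_REVISION', value: env.SVN_REVISION ?: ''),
              string(name: 'UPSTREAM_BUILD_NUMBER', value: env.BUILD_NUMBER),
              string(name: 'UPSTREAM_WORKSPACE', value: env.WORKSPACE)
          ]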
We have set up a Jenkins instance as a remote testing resource for our developers. Every time a tag is created matching our refspec a job is triggered and the results emailed to the developer.
A job is defined as follows:
1 phase consisting of three jobs (frontend tests, integration tests, unit tests)
All subjobs are executed, irrespective of success
Email the developer the test results
This setup mostly works except for two issues:
I cannot get the job to run in parallel. The subjobs run in parallel, but only one instance of the job runs at a time. Is this something I can configure differently somewhere, or is this inherent in the way the plugin works?
The main job checks out and occupies one of our build servers for the duration of the job. Is there a way to do git polling and then just grab the hashref and release the build server on which the polling was done before continuing to build the subjobs?
In the Multi Job plugin, everything listed in the same "Phase" runs in parallel; however, the multijob itself needs somewhere to run. If you have a build followed by a test phase, you can add a "Build Phase" prior to the test phase, and only that phase will require a "build server".
There is an option called "Execute concurrent builds if necessary" that allows multiple builds of the same job to run simultaneously. This option must be set on both the parent job and the subjobs, as the default behavior of Jenkins is to allow only one build of a project (job) to run at a time. Beware: read the comments, as this may have unintended side effects.
It's not clear what you mean about polling; however, if you are using git, you may want to use webhooks so that pushes to the git repository invoke Jenkins directly. No need to poll.
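Not a Multi Job feature, but for comparison: if you ever migrate this to a Pipeline job, "agent none" at the top level means an executor is only occupied while a stage is actually running, which addresses your second point (a sketch; the label and script names are placeholders):

    pipeline {
        agent none                           // no executor is held for the whole run
        stages {
            stage('Tests') {
                parallel {
                    stage('Frontend tests') {
                        agent { label 'test' }
                        steps { sh './run-frontend-tests.sh' }
                    }
                    stage('Integration tests') {
                        agent { label 'test' }
                        steps { sh './run-integration-tests.sh' }
                    }
                    stage('Unit tests') {
                        agent { label 'test' }
                        steps { sh './run-unit-tests.sh' }
                    }
                }
            }
        }
    }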
I'm trying to get a bunch of MSpec tests to run on multiple cores in TFS 2013. It doesn't appear to do this out of the box: it can run MSpec, but only in sequence, and that takes over an hour.
I am following this guide, but in step 4 he says replace the Foreach Xaml element with ParallelForEach to get the tests to run in parallel. I downloaded the default build template in TFS 2013. It is a lot simpler, but it doesn't have this tag.
It has:
<mtba:RunAgileTestRunner
DisplayName="Run VS Test Runner"
Enabled="[Not AdvancedTestSettings.GetValue(Of Boolean("DisableTests", false)]"
TestSpecs="[AutomatedTests]"
ConfigurationsToTest="[ConfigurationsToBuild]" />
The default MSpec test runner cannot run tests in parallel. That's why you see the reimplementation of a parallel test runner.
I doubt that TFS is implementing an MSpec test runner from the framework source code (although that would be possible). That parallel test runner is using internal classes, like ISpecificationRunner, and running them in parallel.
Your only options, if you must stick with MSpec and TFS, are
Split your tests into multiple projects/assemblies and feed them to a TFS parallel task that shell-executes the default test runner
Use a TFS shell-execute task to run your tests through the parallel runner
I am assuming that if you want to run tests in parallel they are integration tests that take a long time to run.
If that is the case, then you should move all non-unit tests out of the build and push them further down the pipeline.
http://nakedalm.com/execute-tests-release-management-visual-studio-2013/
You can use Release Management to deploy your application and run your integration tests. Here you can run a larger number of long running tests without locking your build servers.
I am using Jenkins for integration testing.
Just to give some context: at the moment I have a separate build server which produces the build daily, and Jenkins is not used as the build server. The build server executes the unit tests in my case.
When the build process is complete, it invokes the Jenkins job. In that job, Jenkins starts to deploy the build to the virtual machine. I have a script for doing this.
Following that, my plan is to run several scripts to do the end-to-end testing.
Now I have several questions in this regard:
How to parallelize the execution of the end-to-end tests?
As I am adding script after script, I am getting worried about how manageable it will be.
I am always using the web interface for adding and changing the scripts. How can I do this from the command line?
Any ideas for a good tutorial? Any pointers from all of you? Thanks!
Looks like Build Flow Plugin is what I need.
https://github.com/jenkinsci/build-flow-plugin
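A minimal Build Flow DSL sketch that runs the end-to-end test jobs in parallel before a final report job (the job names are placeholders):

    // Build Flow DSL: run the end-to-end jobs concurrently, then collect results.
    parallel (
        { build("e2e-checkout-flow") },
        { build("e2e-search-flow") },
        { build("e2e-admin-flow") }
    )
    build("e2e-report")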
You might want to try and see if you can use the Build Pipeline plugin before Build Flow. Much better visualization of what is going on, and less scripting.
I link build and deploy jobs in one sequence and then have unit and integration test jobs linked separately off the build job. You can then use the Fail The Build plugin to have downstream jobs fail upstream ones.