Build and run unit tests with two distinct jobs - Jenkins

I have a basic setup: some source files stored in GitHub that are pulled and built by a Jenkins job.
Now I'd like to run the unit tests automatically when the build is done (I'm using NUnit, if that helps).
I could add another build step to the "build" job to run nunit-console but I'd like to separate the build task from the unit-testing task, so that in the Jenkins dashboard I can directly see what is broken: the build or "only" the tests.
I could create another job that would pull the source code too, but that would duplicate the first job.
What's the simplest way to run the unit tests directly on the binaries produced by the first job (run the second job in the same workspace? copy the binaries? ...)?
Thanks for any input.

You could use the Copy Artifact Plugin to copy the artefacts to another job and then run the unit tests there, though whether this works may depend on how the C# project is packaged and structured.
It looks like you can use the NUnit Plugin to publish the results of your tests, so you may be able to use a single job: I don't think the tests will run if the previous build step fails, just as they don't for JUnit tests.
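If you do split the work into two jobs, the test job only needs the built assemblies plus one shell step that runs the console runner and writes the XML report the NUnit plugin reads. A minimal sketch, assuming the NUnit 2.x console runner and a hypothetical test assembly name (the flag syntax differs between NUnit versions; NUnit 3 uses --result instead):
# run the tests against the binaries copied in by the Copy Artifact Plugin;
# MyProject.Tests.dll is a placeholder - point it at your real test assembly
nunit-console.exe MyProject.Tests.dll /xml=TestResult.xml
Point the NUnit plugin's post-build step at TestResult.xml and the test job's status will reflect test failures independently of the build job's status.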

Related

What happens to data on a CI pipeline

I've been asked to create a CI pipeline for a project at work. I'm creating a load test with JMeter and Taurus, and I plan to integrate it with Jenkins to build the whole pipeline. I'm just starting in this field, and a question that came to my mind is:
What happens to all the data created by the load test? Does it go on to the deploy phase or does it get deleted once the test is done? Should I clean up after the tests end?
The data is kept in the Jenkins workspace, and by default it stays on the file system forever.
If you decide to publish the artifacts, they will be available on the Jenkins build dashboard via the web interface.
You might also be interested in the Jenkins Performance Plugin, which allows plotting performance trend charts and conditionally marking builds as unstable or failed depending on pass/fail thresholds.
An example configuration can be found in the How to Run a Taurus Test through Jenkins Pipelines article.
I am not completely familiar with your setup, but as far as I can see from quick research, JMeter does the same as every other testing framework and generates HTML reports. Jenkins won't delete them unless you explicitly delete them (rm file.html) or call cleanWs (clean workspace). If the job is deleted, so are the files.
So the test result file should still be present in the deploy phase. You can use a plugin to collect the results, or just archive them, or do whatever fits your workflow.
There is generally no need to clean it up (you usually configure Jenkins to discard old builds, which takes care of that).
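If you do want to keep the workspace lean, a shell build step can point Taurus at a dedicated artifacts folder and remove it after archiving. A sketch assuming the bzt command line and its settings.artifacts-dir option (file and folder names are placeholders):
# run the load test, collecting all Taurus artifacts in ./results
bzt load_test.yml -o settings.artifacts-dir=results
# archive results/ via the job's "Archive the artifacts" post-build step,
# then remove the bulky raw data so the workspace stays small
rm -rf results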

Run Jenkins post-build step on the slave node instead of master

I have created a Jenkins job and am able to assign it to run on the master/slave using their label name in Restrict where this project can be run. My job needs to do this:
1. Copy test data to a target folder (not the Jenkins workspace)
2. Run the test
3. Summarize results
4. Clean up the folder with the data - yet to be implemented
Regarding step 4, I have to delete the data before marking the job as complete. I have considered a Conditional Build Step, and it looks to be working in all cases except when the job is aborted.
I am considering a post-build step using PostBuildTask/GroovyPostBuild, but it only works when the job is assigned to run on master. When I try to run the job on Slave1/Slave2, the same task doesn't work, and I realized that it's being executed on master instead of Slave1/2.
Would appreciate any guidance on how I can solve this issue.
Thanks
Yes, post-build steps run on master by default, so you need another plugin that lets you choose which node runs the post-build step. On my system I use the Flexible Publish plugin, which I think can solve your issue.
Flexible plugin example
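Independent of plugins, another option is to handle the cleanup inside the build shell step itself with a trap, so it runs on whichever node the job executes on, even when a step fails. Whether it also fires on an abort depends on how Jenkins kills the process tree (it normally sends SIGTERM first, which this sketch traps). Paths and script names below are placeholders:
#!/bin/bash
DATA_DIR=/opt/testdata            # placeholder target folder outside the workspace

cleanup() { rm -rf "$DATA_DIR"; } # step 4: delete the data
trap cleanup EXIT                 # runs on normal exit and on failure
trap 'exit 143' TERM              # route the SIGTERM sent on abort through the EXIT trap

cp -r test-data/. "$DATA_DIR"     # step 1: copy test data to the target folder
./run_tests.sh                    # step 2: run the test
./summarize_results.sh            # step 3: summarize results
Because the shell step runs on the node the job was assigned to, the cleanup happens on the slave rather than on master.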

What did Jenkins actually build?

I created a freestyle project in Jenkins, in which I chose Git as the source code management (screenshot below).
That's pretty much my config. The repo you see in there is a public repo. Then I saved the config and clicked Build Now.
It seems to work, based on the notification on screen, which says 'success'. But I have no idea what the heck Jenkins produced. I didn't instruct it what to build or how to build it. How does it know what I want? And let's say it did build something: where does it store the build? I didn't instruct it where to store the built file either. Can someone explain what is going on?
To actually build something you need to add something to the Build section in the project configuration. For a JavaScript project it might look something like:
npm install
npm run test-coverage
npm run linter
npm run complexity
where each item after run is a script in your package.json. Then you can add plugins to read the outputs of those actions, for example:
Clover test coverage publisher
TAP (Test) results publisher
HTML Publisher for publishing static analysis results
Checkstyle publisher for linting results
This allows you to pass and fail builds based on certain test criteria, which is where continuous integration starts to shine.
In a Jenkins job you have several sections: pre-build actions to prepare the environment, SCM to check out from source control, the Build section to run your build pipeline, and post-build actions to run after the build section.
If you defined only the SCM section, all your job did was check out your sources from the source control you provided. The status of that action is SUCCESS.
Don't forget to check the console output of the job that ran to see which steps were executed.
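In this case the only "output" is the checked-out working copy, which lives in the job's workspace on the node that ran the build. Assuming a freestyle job run on the built-in node, you can inspect it with something like (the job name is a placeholder, and $JENKINS_HOME is wherever your Jenkins home directory is):
# the default workspace location for a job named my-freestyle-job
ls "$JENKINS_HOME/workspace/my-freestyle-job"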

Parallelizing tests with Jenkins

I am using Jenkins for integration testing.
Just to give the context: at the moment I have a separate build server which produces the build daily, and Jenkins is not used as the build server. The build server executes the unit testing in my case.
When the build process is complete it invokes the Jenkins job. In that job Jenkins starts deploying the build into the virtual machine. I have a script for doing this.
After that, my plan is to run several scripts for the end-to-end testing.
Now I have several questions in this regard:
How to parallelize the execution of the end-to-end tests?
As I am adding script after script, I am getting worried about how manageable it will be.
I am always using the web interface for adding and changing the scripts. How to do this from the command line?
Any ideas for a good tutorial? Any pointers from all of you? Thanks!
Looks like the Build Flow Plugin is what I need.
https://github.com/jenkinsci/build-flow-plugin
You might want to try the Build Pipeline Plugin before Build Flow. Much better visualization of what is going on, and less scripting.
I link build and deploy jobs in one sequence and then have unit and integration test jobs linked separately off the build job. You can then use the Fail The Build plugin to have downstream jobs fail upstream ones.
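On the first question, independent of either plugin: if the end-to-end scripts don't depend on each other, even a single shell build step can run them in parallel with plain bash job control (the script names here are placeholders):
#!/bin/bash
# start each independent end-to-end suite in the background
./e2e_login_tests.sh &
./e2e_checkout_tests.sh &
./e2e_api_tests.sh &
# wait for all of them; fail the build if any script failed
FAIL=0
for job in $(jobs -p); do
    wait "$job" || FAIL=1
done
exit $FAIL
On the third question, Jenkins also ships a command-line client, jenkins-cli.jar, whose get-job and update-job commands let you export a job's config.xml, edit it, and push it back, so job definitions can be managed from the command line and kept under version control instead of edited only through the web interface.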

Jenkins - Running install tests on remote machine and reporting results back to Jenkins

I am looking to add some automated tests to run nightly on a project. Currently the project has a few jobs that create multiple builds of various components of the project.
The builds create rpm files; there are multiple jobs creating multiple rpms. I want to grab all of the rpms, install them and test them all under a single test job, as there are lots of dependencies between them. I can install via the command line, but these rpms are stored on the Jenkins master machine.
This is as far as I have got:
I have set up the job in Jenkins
I have created a slave for the job to run on
I have used Jenkins to run a bash script on the slave (works)
What I want to do is the following:
At regular intervals (let's say once per day, when I know the builds have all completed) fetch the most recent passed builds of all the projects and copy them to the slave machine
Install the rpms using a script.
The script performs certain tests during the install (looking at logs etc...), so I want to collect these all and send the results back to Jenkins (I may eventually perform other tests here too)
I want the status of the last build image to be determined by my own tests
I also want the test results, logs, etc... to be stored in the Jenkins test job so that we can view them and marvel at their awesomeness!
What I don't know how to do is:
How do I copy the files to the slave? Should this be handled on the slave itself using wget or something, or does Jenkins have functionality (a plugin, maybe) that handles this all for me?
How do I report my custom results back to the Jenkins job?
I only started working with Jenkins three days ago (the project and Jenkins build jobs are a lot older than that), apologies if I'm missing anything obvious.
UPDATE
I'm thinking a combination of these plugins might do the trick, though I've not looked into them much yet.
Copy artifact plugin to copy the rpms from the latest stable builds of the other jobs
xUnit plugin to interpret some XML files that I generate during the test process
I didn't actually need any plugins for this. I simply set up the job to run on the slave, had a build step that ran some tests and generated an XML file (similar to the JUnit XML results), and then added a post-build step to look at the JUnit results (even though the tests weren't JUnit tests).
This worked a charm, and I have builds being marked as unstable if they fail the tests I specify, like whether an rpm installed and whether such-and-such happened.
I was able to get the latest stable builds because the latest stable builds are copied to a file server anyway; failed builds don't go there, so that was simple.
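For anyone following the same route, the XML only needs to look enough like a JUnit report for the post-build publisher to parse it. A minimal sketch of a build step that checks one install and writes such a file (mypackage and the file name are placeholders):
#!/bin/bash
# check whether the rpm actually installed; record it as a JUnit-style test case
if rpm -q mypackage >/dev/null 2>&1; then
    FAILURES=0; BODY=""
else
    FAILURES=1; BODY='<failure message="mypackage is not installed"/>'
fi
cat > install-tests.xml <<EOF
<testsuite name="install-tests" tests="1" failures="$FAILURES">
  <testcase name="rpm_installed">$BODY</testcase>
</testsuite>
EOF
Point the JUnit (or xUnit) publisher at install-tests.xml and the build is marked unstable whenever the check fails.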
