I have a Jenkins job that runs NUnit tests for a project. The Jenkins job fails, although all the unit tests pass.
So Jenkins reports the build as failed, but the test results show no failures.
I cannot figure out what is causing the job to fail. Is there some way to see what causes a Jenkins job to be marked as failed, i.e. a detailed log file for the job or something? Any suggestions would be much appreciated.
Have you checked the Console Output for the failed job?
That said, errors in the Console Output can be hard to find, and then harder to understand. Sometimes I need to log in / remote to the build machine and build the solution, or run the unit tests, manually to see the error in an uncluttered, non-abstracted way (i.e., in the Visual Studio IDE or the NUnit GUI).
Oh, and the Log Parser Plugin makes finding errors in Jenkins much easier.
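If you want to grab the full console output programmatically rather than through the web UI, Jenkins serves the plain-text log of any build at its `/consoleText` URL. A minimal Python sketch of building that URL (the server and job names are placeholders):

```python
from urllib.parse import quote

def console_text_url(base_url, job_name, build="lastBuild"):
    """Build the URL of the plain-text console log for a Jenkins build."""
    return f"{base_url.rstrip('/')}/job/{quote(job_name)}/{build}/consoleText"

if __name__ == "__main__":
    # Hypothetical server and job names, for illustration only.
    print(console_text_url("http://jenkins.example.com:8080", "my-nunit-job"))
```

You can then fetch that URL with `curl` or `urllib.request` (adding credentials if your Jenkins requires them) and grep the log offline.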
For GUI test automation we have set up a Maven project using JBehave (v4.0.4) as a BDD framework. The stories are executed with a JUnitReportingRunner and run continuously in a Jenkins (v1.630) master/slave environment.
I recently noticed that in some cases the Jenkins build is marked as successful despite some failed steps. The xUnit test report correctly indicates that there is a failed test, while the Jenkins build does not. We haven't configured any thresholds concerning the build status, so one failed test should cause the build to fail (and it does, most of the time).
This problem is very annoying since we heavily rely on the Jenkins mail notifications. Any pointers on how to solve it are very much appreciated.
I am taking my first steps with the Jenkins Workflow plugin, and it is really hard to diagnose problems with batch calls. If I run a batch script in a "normal" freestyle Jenkins job, I see all the output that the batch produces. But with the Workflow plugin, all the output from the batch calls is hidden.
How can I get the Jenkins Workflow plugin to show the output of a batch call?
Well, this test shows that output is expected (of course). It certainly sounds like a bug. Is there a minimal reproducible test case? If so, please file it.
I am using Jenkins for integration testing.
Just to give some context: at the moment I have a separate build server which produces the build daily, and Jenkins is not used as the build server. The build server also executes the unit testing in my case.
When the build process is complete, it invokes the Jenkins job. In that job, Jenkins deploys the build to a virtual machine; I have a script for doing this.
After that, my plan is to run several scripts to do the end-to-end testing.
Now I have several questions in this regard:
How to parallelize the execution of the end-to-end tests?
As I add script after script, I am getting worried about how manageable this will be.
So far I have always used the web interface for adding and changing the scripts. How can I do this from the command line?
Any ideas for a good tutorial? Any pointers from all of you? Thanks!
Looks like Build Flow Plugin is what I need.
https://github.com/jenkinsci/build-flow-plugin
You might want to try the Build Pipeline plugin before Build Flow. Much better visualization of what is going on, and less scripting.
I link the build and deploy jobs in one sequence, and then have the unit and integration test jobs linked separately off the build job. You can then use the Fail The Build plugin to have downstream jobs fail upstream ones.
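If you'd rather keep the fan-out in your own script instead of a plugin, parallelizing the end-to-end scripts can be sketched in plain Python with a thread pool. The script names are hypothetical stand-ins; here they are replaced by trivial shell commands:

```python
import subprocess
import sys
from concurrent.futures import ThreadPoolExecutor

def run_script(cmd):
    """Run one end-to-end test command; return (cmd, exit code)."""
    return cmd, subprocess.call(cmd, shell=True)

def run_all(commands, workers=4):
    """Run the commands in parallel and return the ones that failed."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(run_script, commands))
    return [cmd for cmd, code in results if code != 0]

if __name__ == "__main__":
    # "true" / "echo done" are stand-ins for real e2e scripts.
    failed = run_all(["true", "echo done"])
    # In a Jenkins shell step, exiting non-zero marks the build as FAILED.
    sys.exit(1 if failed else 0)
```

This also keeps the orchestration in version control alongside the test scripts, which helps with the manageability concern.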
I have a tool in my build process that is very tolerant towards errors and warnings. It will often only log them to the console but not make the build fail. Currently, I am using that tool in the form of an Ant task, but that may change.
I would like the errors and warnings to make the build fail. Is there any way to do this? Can I maybe monitor the console output somehow and make the build fail, if appropriate?
(Just in case you are interested, the tool is Sonar.)
The Post build task plugin looks like a good solution for you.
You can configure it with a regex that is matched against the build log, and then launch a script whose failure can fail the build (using the "Escalate script execution status to job status" option).
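If you prefer to avoid the plugin, the same idea fits in a few lines of Python: pipe the tool's console output through a script that greps for error patterns and exits non-zero (a non-zero exit from a build step fails the build). The patterns below are assumptions; adjust them to your tool's actual log format:

```python
import re
import sys

# Assumed failure markers (ERROR/WARN line prefixes); adjust to the
# real output of your tool.
FAIL_PATTERNS = [re.compile(r"^\s*(ERROR|WARN)\b")]

def find_failures(log_text):
    """Return the log lines that match any failure pattern."""
    return [line for line in log_text.splitlines()
            if any(p.search(line) for p in FAIL_PATTERNS)]

if __name__ == "__main__":
    hits = find_failures(sys.stdin.read())
    for line in hits:
        print(line)
    # A non-zero exit escalates to a failed build step.
    sys.exit(1 if hits else 0)
```

In the job you would run something like `ant sonar 2>&1 | tee build.log` and then feed `build.log` to this script as a final step.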
I have a simple Jenkins job where I run a Python script that executes a bunch of tests. The build shows as UNSTABLE, even though I see no obvious errors in the Jenkins job and no failing tests.
What I'm running is python
I'm not using any external test framework like nose. It's just a plain Python script.
"Unstable" is the result Jenkins assigned when tests are failing. The only other way to get "Unstable" result is through the use of some post-build plugins. If you don't have any other plugins that are setting the build as unstable, then you have to take a closer look at all the tests.
If you paste the console output here, that would help as well
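For reference: a plain Python script with no result publisher can only drive Jenkins to SUCCESS or FAILURE through its exit code; UNSTABLE normally comes from a test-report publisher or a post-build plugin. A minimal sketch, with trivial stand-in commands:

```python
import subprocess
import sys

def run_tests(commands):
    """Run each test command; count the ones with a non-zero exit code."""
    return sum(1 for cmd in commands if subprocess.call(cmd, shell=True) != 0)

if __name__ == "__main__":
    failures = run_tests(["true", "true"])  # stand-ins for real test commands
    # Without a result publisher, Jenkins only sees this exit code:
    # 0 -> SUCCESS, non-zero -> FAILED (never UNSTABLE).
    sys.exit(1 if failures else 0)
```

So if the build is UNSTABLE, something other than the script's exit code, most likely a post-build step, is setting that status.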