Bamboo NUnit parser task incorrectly parses the results from nunit3-console.exe (25 tests were quarantined)

In my Bamboo plan I have a script task whose script body is:
@echo off
SET nucpath=%1
SET projectvar=%2
SET xmlvar=%3
CALL SET xmlvar=%%xmlvar:-xml=--result%%
SET outputvar=%4;format=nunit2
SHIFT
SHIFT
SHIFT
SET remvar=%2
:loop
SHIFT
if [%1]==[] GOTO afterloop
SET remvar=%remvar% %2
GOTO loop
:afterloop
REM Ensure PATH includes nunit3-console.exe or edit the line below to include the full path.
%nucpath% %projectvar% %xmlvar% %outputvar% %remvar%
with arguments:
"${bamboo.build.working.directory}\src\packages\NUnit.ConsoleRunner.3.5.0\tools\nunit3-console.exe"
"${bamboo.build.working.directory}\src\CutwiseSeleniumTests\CutwiseSeleniumTests.csproj"
"TestResult.xml"
"Debug"
This task works perfectly; after it runs I receive a correct TestResult.xml file.
But the next (final) task, the NUnit Parser, produces a wrong result. It looks like the NUnit task doesn't work properly despite the "format=nunit2" parameter in the script running nunit3-console.exe.
The problem is that the NUnit parser reports 25 tests as skipped.
But in TestResult.xml I see the following test summary:
17-Oct-2016 16:41:01 Test Run Summary
17-Oct-2016 16:41:01 Overall result: Failed
17-Oct-2016 16:41:01 Test Count: 45, Passed: 35, Failed: 1, Inconclusive: 0, Skipped: 9
17-Oct-2016 16:41:01 Failed Tests - Failures: 0, Errors: 1, Invalid: 0
17-Oct-2016 16:41:01 Skipped Tests - Ignored: 9, Explicit: 0, Other: 0
17-Oct-2016 16:41:01 Start time: 2016-10-17 13:35:48Z
17-Oct-2016 16:41:01 End time: 2016-10-17 13:41:01Z
17-Oct-2016 16:41:01 Duration: 313.298 seconds
Here is my TestResult.xml.
What could be the problem, and how can I solve it?

As Charlie commented, the result file was in NUnit 3 format instead of NUnit 2. I changed the script to:
@echo on
SET nucpath=%1
SET projectvar=%2
SET xmlvar=%3
CALL SET xmlvar=%%xmlvar:-xml=--result%%
SET outputvar=%4;format=nunit2
SHIFT
SHIFT
SHIFT
SET remvar=%2
:loop
SHIFT
if [%2]==[] GOTO afterloop
SET remvar=%remvar% %2
GOTO loop
:afterloop
REM Ensure PATH includes nunit3-console.exe or edit the line below to include the full path.
%nucpath% %projectvar% %xmlvar% %outputvar% %remvar%
And I pass Bamboo's arguments as:
"${bamboo.build.working.directory}\src\packages\NUnit.ConsoleRunner.3.5.0\tools\nunit3-console.exe", "${bamboo.build.working.directory}\src\CutwiseSeleniumTests\CutwiseSeleniumTests.csproj", -xml="TestResult.xml", --config="Debug"

Related

How to run more than one test from the command line using the NUnit console runner?

I have a list of failed tests (belonging to different classes and namespaces). When I try to run them from Jenkins or a local command line with the syntax below:
.\packages\NUnit.ConsoleRunner.3.10.0\tools\nunit3-console.exe --where test==failedTest1,failedTest2,failedTest3 .\ABCTesting\bin\Debug\Api.IntegrationTesting.dll
I get this output:
Run Settings
DisposeRunners: True
WorkDirectory: C:\API_IntegrationTesting
ImageRuntimeVersion: 4.0.30319
ImageTargetFrameworkName: .NETFramework,Version=v4.7.2
ImageRequiresX86: False
ImageRequiresDefaultAppDomainAssemblyResolver: False
NumberOfTestWorkers: 12
Test Run Summary
Overall result: Passed
Test Count: 0, Passed: 0, Failed: 0, Warnings: 0, Inconclusive: 0, Skipped: 0
Start time: 2021-06-10 20:43:49Z
End time: 2021-06-10 20:43:52Z
Duration: 2.574 seconds
There are two basic problems in the syntax of your where clause...
1. The comma (',') has no function in a where clause. Consequently, you are telling NUnit to look for a test named "failedTest1,failedTest2,failedTest3". Most likely <g> you don't have a test by that name.
2. The test operand is specified as the FullName of the test, i.e. the namespace plus the actual test name.
So the correct syntax in your example could be...
--where "test==My.Namespace.failedTest1 || test==My.Namespace.failedTest2 || test==My.Namespace.failedTest3"
As an alternative approach, you may want to consider using the --testlist option, which allows you to place the test names in a text file, one full name per line.
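For example (file name hypothetical), failed-tests.txt would contain one full name per line:

My.Namespace.failedTest1
My.Namespace.failedTest2
My.Namespace.failedTest3

and the runner would be invoked as:

.\packages\NUnit.ConsoleRunner.3.10.0\tools\nunit3-console.exe --testlist=failed-tests.txt .\ABCTesting\bin\Debug\Api.IntegrationTesting.dll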

How to call builds in parallel with delay in Jenkins

I have a Jenkins parallel build issue. I have a large configuration-options blob, and I want to call the same build job repeatedly with changes to the config blob, with a short delay between each run. The number of calls is based on selections, so it could be anywhere from 1 job to, say, 7 jobs at once, hence building the branch map programmatically. Given the much pared-down version of the code below (assuming 'opts' is what comes in from the selection), could someone help me accomplish this? The long job has a timestamp inside it, so we don't want the runs all kicking off at the exact same moment.
Without the 'sleep' I will see "Starting building: myLongJob #100", then 101, then 102, etc. However, there is no delay between the jobs. With the sleep I see "Starting building: myLongJob #100", then 100, then 100 again.
Other than the given code, I tried adding quietPeriod: diffnum+1, but that either did nothing or waited until the long job finished. I also tried adding wait: true (and wait: false). Still I see either no delay or the same build number repeatedly.
cfgOtps = """
server: MYOPTION
prefix: Testing
"""
def opts = ["Windows", "Linux", "Mac"]
// function will create a list of cfg blobs with server replaced by 'opt' so in this example
// cfgs will be length of 3 with server beign Windows, Linux, and Mac respectively
def cfgs = generate_cfg_list(cfgOtps, opts)
//update so each entry in branches has a different key
def branchnum = 0
def branches = [:]
//variable to have different inside the closure
def diffnum = -1
def runco = ""
cfgs.each {
branchnum += 1
branches["branch${branchnum}"] = {
diffnum += 1
//sleep(diffnum+5)
def deployResult = build job: 'myLongJob', parameters: [[$class: 'TextParameterValue', name: 'ConfigObj', value: cfgs[diffnum]],
string(name:'dummy', value: "${diffnum}")]
}
}
parallel branches
I expected the output to be something like the following, with a short delay between each:
Starting building: myLongJob #100
Starting building: myLongJob #101
Starting building: myLongJob #102
Which I do get if I do not have the sleep. The issue there is that the long job must not run concurrently with itself, since it sometimes overwrites things.
Adding the sleep results in the following
Starting building: myLongJob #100
Starting building: myLongJob #100
Starting building: myLongJob #100
or maybe two with the same number and one with a different build number. I'm not sure why a sleep would induce that behavior.
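One likely culprit (my reading, not from the original thread): every branch closure shares the single mutable diffnum, so when the branches execute in parallel several of them can read the same value and submit identical parameter sets, which the Jenkins queue can coalesce into one build. A minimal sketch of a fix, capturing the config and index per iteration and staggering the starts with the pipeline sleep step (the 10-second stagger is an arbitrary choice):

def branches = [:]
cfgs.eachWithIndex { cfg, i ->
    // cfg and i are fresh bindings per iteration, so each branch closure
    // gets its own copies instead of sharing one mutable counter
    branches["branch${i}"] = {
        sleep time: i * 10, unit: 'SECONDS'   // stagger the starts
        build job: 'myLongJob',
              parameters: [[$class: 'TextParameterValue', name: 'ConfigObj', value: cfg],
                           string(name: 'dummy', value: "${i}")]
    }
}
parallel branches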

TFS 2017 How do I know which test is being run (before it finishes)?

I have a TFS 2017 (version 15.105.25910.0) build which also runs tests, but one test is taking a very long time and the whole build is cancelled due to a timeout set in the 'General' tab of the build edit page. The TFS log is included below. How can I check which test is at fault?
Notice the time difference between the first and second log lines. I assume the faulty test runs after ReportAnalyzer_Blabla_SomethingTest, but with over 1k tests it's hard to guess which one it is.
2017-08-30T11:30:09.7614471Z Passed ReportAnalyzer_Blabla_SomethingTest
2017-08-30T11:53:52.1581687Z ##[debug]FindFiles.FindMatchingFiles(rootFolder = D:\TfsBuildAgents\RmsBuild\_work\8\s\TestResults, matchPattern = *.trx, includeFiles = True, includeFolders = False
2017-08-30T11:53:52.1581687Z ##[debug]FindFiles.GetMatchingItems(includePatterns.Count = 1, excludePatterns.Count = 0, includeFiles = True, includeFolders = False
2017-08-30T11:53:52.1581687Z ##[debug]FindFiles.FindMatchingFiles - Found 0 matches
2017-08-30T11:53:52.1581687Z ##[debug]Processed: ##vso[task.logissue type=warning;code=002003;]
2017-08-30T11:53:52.1581687Z
2017-08-30T11:53:52.1581687Z
2017-08-30T11:53:52.1737949Z ##[warning]No results found to publish.
2017-08-30T11:53:52.1737949Z ##[debug]Processed: ##vso[task.logissue type=warning]No results found to publish.
2017-08-30T11:53:52.2050485Z ##[error]The operation was canceled.
2017-08-30T11:53:52.2050485Z ##[debug]System.OperationCanceledException: The operation was canceled.
Normally the faulty test should be the first test after ReportAnalyzer_Blabla_SomethingTest. But, as you said, with over 1k tests and only the log you posted, if you didn't split the tests we cannot identify exactly which one is faulty. In this case, I'm afraid you have to debug them one by one.
So, you can try to split the tests and then debug them accordingly.
You can also check whether there are any other detailed logs that would help track it down.
See Review continuous test results after a build for more information.
I've found a messy workaround which helped me find the failing test. In all test classes (that's the messy part) I added code which appends the currently running unit test's name to a file; the last entry was the one I was interested in.
[ClassInitialize]
public static void ClassInitialize(TestContext testContext)
{
    // This is just an example! (File.AppendAllText needs a using for System.IO.)
    File.AppendAllText("testRunLog.txt", testContext.TestName + Environment.NewLine);
}
The closest thing to "run the code before each test in the whole test project" seems to be the ClassInitialize attribute:
https://msdn.microsoft.com/en-us/library/microsoft.visualstudio.testtools.unittesting.classinitializeattribute.aspx
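A related option, if the test classes share (or can be made to share) a common base class: a [TestInitialize] method defined there runs before every test in every derived class, so the log gets one line per test instead of one per class. A sketch under that assumption (class name and file path are arbitrary):

using System;
using System.IO;
using Microsoft.VisualStudio.TestTools.UnitTesting;

public class LoggingTestBase
{
    // MSTest injects the current test's context into this public property
    public TestContext TestContext { get; set; }

    [TestInitialize]
    public void LogTestStart()
    {
        // Runs before each test in every class derived from LoggingTestBase
        File.AppendAllText("testRunLog.txt", TestContext.TestName + Environment.NewLine);
    }
}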

How to get Integration Test Results with additional output like number of: total tests, failed tests etc in Sonarqube?

My setup has Jenkins integrated with SonarQube, and I'm looking to get SonarQube to display the following results:
Number of Tests
Number of Tests Passed
Number of Tests Failed
Number of Tests Skipped
Total Execution time
As of now, the Integration Test Results widget gives me only the coverage results output.
If you have unit and/or IT tests, they should be uploaded to your SonarQube project dashboard (the same way you should see them on your Jenkins project dashboard).
Then, in the 'Measures' menu, under the 'Coverage' tab, there is a lot of information displayed, such as the number of failed tests, the number of successful tests, the total execution time, the number of skipped tests, and so on.
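Which numbers show up depends on the analysis properties pointing the scanner at the test reports. For a Java/Maven project with Surefire and JaCoCo, for instance, they would look roughly like this (paths are assumptions that depend on your build layout):

# sonar-project.properties (or -D flags on the scanner command line)
sonar.junit.reportPaths=target/surefire-reports
sonar.coverage.jacoco.xmlReportPaths=target/site/jacoco/jacoco.xml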

Debug Delays in Visual Studio Test Task

We're using Team Foundation Server 2015 Update 2 on-premises. The Visual Studio Test task takes about 30 seconds to publish the test results after it runs.
Small unit test project:
2016-05-02T01:02:56.9641774Z Attachments:
2016-05-02T01:02:56.9641774Z C:\Agent1\_work\9\TestResults\eb650e78-ddfa-4116-af15-9847b5cc2632\TFSBUILD_BuildAgent 2016-05-02 03_02_23.coverage
2016-05-02T01:02:56.9641774Z Total tests: 316. Passed: 316. Failed: 0. Skipped: 0.
2016-05-02T01:02:56.9641774Z Test Run Successful.
2016-05-02T01:02:56.9641774Z Test execution time: 35,1251 Seconds
2016-05-02T01:02:57.1048030Z Results File: C:\Agent1\_work\9\TestResults\TFSBUILD_BuildAgent 2016-05-02 03_02_31.trx
2016-05-02T01:03:26.6662691Z Publishing Test Results...
2016-05-02T01:03:31.2109274Z Test results remaining: 316
2016-05-02T01:03:37.6228586Z Published Test Run : http://<tfs server>:8080/tfs/DefaultCollection/Project/_TestManagement/Runs#runId=52024&_a=runCharts
As you can see, after all tests finish and the results file is written, there is a 30-second pause before "Publishing Test Results..." even appears. Then it takes another 11 seconds to upload a few kB over the local network.
In the _diag folder I find the following entries in the corresponding log file (from a newer build, but everything else is identical):
06:48:13.171983 BaseLogger.LogConsoleMessage(scope.JobId = 5f7ff256-ef21-4150-86fc-678cdef40792, message = Results File: C:\Agent1\_work\9\TestResults\TFSBUILD_BuildAgent 2016-05-12 08_47_49.trx)
06:48:45.798627 FindFiles.FindMatchingFiles(rootFolder = C:\Agent1\_work\9\TestResults, matchPattern = *.trx, includeFiles = True, includeFolders = False
I'll assume that this is not working as intended, but how do I best debug such a problem?
To quote the TFS documentation:
"When you use these predefined reports or create your own reports, there is a time delay between the time that you save the test results and the time that the data is available in the warehouse database or the analysis services database in Team Foundation Server."
I think this might explain the problem you seem to have.
