Debug Delays in Visual Studio Test Task

We're using Team Foundation Server 2015 Update 2 on-premises. The Visual Studio Test task takes about 30 seconds to publish the test results after the test run finishes.
Small unit test project:
2016-05-02T01:02:56.9641774Z Attachments:
2016-05-02T01:02:56.9641774Z C:\Agent1\_work\9\TestResults\eb650e78-ddfa-4116-af15-9847b5cc2632\TFSBUILD_BuildAgent 2016-05-02 03_02_23.coverage
2016-05-02T01:02:56.9641774Z Total tests: 316. Passed: 316. Failed: 0. Skipped: 0.
2016-05-02T01:02:56.9641774Z Test Run Successful.
2016-05-02T01:02:56.9641774Z Test execution time: 35,1251 Seconds
2016-05-02T01:02:57.1048030Z Results File: C:\Agent1\_work\9\TestResults\TFSBUILD_BuildAgent 2016-05-02 03_02_31.trx
2016-05-02T01:03:26.6662691Z Publishing Test Results...
2016-05-02T01:03:31.2109274Z Test results remaining: 316
2016-05-02T01:03:37.6228586Z Published Test Run : http://<tfs server>:8080/tfs/DefaultCollection/Project/_TestManagement/Runs#runId=52024&_a=runCharts
As you can see, after all tests finish and the results file is written there is a 30-second pause before "Publishing Test Results..." even appears. Then it takes another 11 seconds to upload a few KB over the local network.
In the _diag folder I find the following entries in the corresponding log file (of a newer build, but everything else is identical):
06:48:13.171983 BaseLogger.LogConsoleMessage(scope.JobId = 5f7ff256-ef21-4150-86fc-678cdef40792, message = Results File: C:\Agent1\_work\9\TestResults\TFSBUILD_BuildAgent 2016-05-12 08_47_49.trx)
06:48:45.798627 FindFiles.FindMatchingFiles(rootFolder = C:\Agent1\_work\9\TestResults, matchPattern = *.trx, includeFiles = True, includeFolders = False
I assume this is not working as intended, but how do I best debug such a problem?

To quote the TFS documentation:
"When you use these predefined reports or create your own reports, there is a time delay between the time that you save the test results and the time that the data is available in the warehouse database or the analysis services database in Team Foundation Server."
I think this might explain the problem you seem to have.
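Beyond that, to pinpoint where the time actually goes, you can queue the build with the variable system.debug set to true; the agent then writes timestamped ##[debug] lines for each internal step directly into the console log (the same FindMatchingFiles entries you found in the _diag folder), which makes a gap like this 30-second one much easier to localize.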

Related

Can I make flex template jobs take less than 10 minutes before they start to process data?

I am using the Terraform resource google_dataflow_flex_template_job to deploy a Dataflow flex template job.
resource "google_dataflow_flex_template_job" "streaming_beam" {
provider = google-beta
name = "streaming-beam"
container_spec_gcs_path = module.streaming_beam_flex_template_file[0].fully_qualified_path
parameters = {
"input_subscription" = google_pubsub_subscription.ratings[0].id
"output_table" = "${var.project}:beam_samples.streaming_beam_sql"
"service_account_email" = data.terraform_remote_state.state.outputs.sa.email
"network" = google_compute_network.network.name
"subnetwork" = "regions/${google_compute_subnetwork.subnet.region}/subnetworks/${google_compute_subnetwork.subnet.name}"
}
}
It's all working fine; however, without my requesting it, the job seems to be using Flexible Resource Scheduling (FlexRS) mode. I say this because the job takes about ten minutes to start, and during that time it has state=QUEUED, which I think is only applicable to FlexRS jobs.
FlexRS mode is fine for production scenarios; however, I'm still developing my Dataflow job, and while doing so FlexRS is massively inconvenient because it takes about 10 minutes to see the effect of any change I make, no matter how small.
In Enabling FlexRS it is stated:
To enable a FlexRS job, use the following pipeline option:
--flexRSGoal=COST_OPTIMIZED, where the cost-optimized goal means that the Dataflow service chooses any available discounted resources or
--flexRSGoal=SPEED_OPTIMIZED, where it optimizes for lower execution time.
I then found the following statement:
To turn on FlexRS, you must specify the value COST_OPTIMIZED to allow the Dataflow service to choose any available discounted resources.
at Specifying pipeline execution parameters > Setting other Cloud Dataflow pipeline options
I interpret that to mean that flexrs_goal=SPEED_OPTIMIZED will turn off FlexRS mode, so I changed the definition of my google_dataflow_flex_template_job resource to:
resource "google_dataflow_flex_template_job" "streaming_beam" {
provider = google-beta
name = "streaming-beam"
container_spec_gcs_path = module.streaming_beam_flex_template_file[0].fully_qualified_path
parameters = {
"input_subscription" = google_pubsub_subscription.ratings[0].id
"output_table" = "${var.project}:beam_samples.streaming_beam_sql"
"service_account_email" = data.terraform_remote_state.state.outputs.sa.email
"network" = google_compute_network.network.name
"subnetwork" = "regions/${google_compute_subnetwork.subnet.region}/subnetworks/${google_compute_subnetwork.subnet.name}"
"flexrs_goal" = "SPEED_OPTIMIZED"
}
}
(note the addition of "flexrs_goal" = "SPEED_OPTIMIZED"), but it doesn't seem to make any difference. The Dataflow UI confirms that I have set SPEED_OPTIMIZED, yet it still takes too long (9 minutes 46 seconds) for the job to start processing data, and it was in state=QUEUED for all that time:
2021-01-17 19:49:19.021 GMT Starting GCE instance, launcher-2021011711491611239867327455334861, to launch the template.
...
...
2021-01-17 19:59:05.381 GMT Starting 1 workers in europe-west1-d...
2021-01-17 19:59:12.256 GMT VM, launcher-2021011711491611239867327455334861, stopped.
I then tried explicitly setting flexrs_goal=COST_OPTIMIZED just to see if it made any difference, but this only caused an error:
"The workflow could not be created. Causes: The workflow could not be
created due to misconfiguration. The experimental feature
flexible_resource_scheduling is not supported for streaming jobs.
Contact Google Cloud Support for further help. "
This makes sense. My job is indeed a streaming job and the documentation does indeed state that flexRS is only for batch jobs.
This page explains how to enable Flexible Resource Scheduling (FlexRS) for autoscaled batch pipelines in Dataflow.
https://cloud.google.com/dataflow/docs/guides/flexrs
This doesn't solve my problem though. As I said above, if I deploy with flexrs_goal=SPEED_OPTIMIZED the job still sits in state=QUEUED for almost ten minutes, yet as far as I know QUEUED is only applicable to FlexRS jobs:
Therefore, after you submit a FlexRS job, your job displays an ID and a Status of Queued
https://cloud.google.com/dataflow/docs/guides/flexrs#delayed_scheduling
Hence I'm very confused:
Why is my job getting queued even though it is not a flexRS job?
Why does it take nearly ten minutes for my job to start processing any data?
How can I speed up the time it takes for my job to start processing data so that I can get quicker feedback during development/testing?
UPDATE: I dug a bit more into the logs to find out what was going on during those 9 minutes 46 seconds. These two consecutive log messages are 7 minutes 23 seconds apart:
2021-01-17 19:51:03.381 GMT
"INFO:apache_beam.runners.portability.stager:Executing command: ['/usr/local/bin/python', '-m', 'pip', 'download', '--dest', '/tmp/dataflow-requirements-cache', '-r', '/dataflow/template/requirements.txt', '--exists-action', 'i', '--no-binary', ':all:']"
2021-01-17 19:58:26.459 GMT
"INFO:apache_beam.runners.portability.stager:Downloading source distribution of the SDK from PyPi"
Whatever is going on between those two log records is the main contributor to the long time spent in state=QUEUED. Does anyone know what might be the cause?
As mentioned in the existing answer, you need to pull the apache-beam modules out of your requirements.txt and install them in their own Dockerfile step. The launch stage runs pip with --no-binary ':all:' (visible in the log above), so anything left in requirements.txt, including the Beam SDK itself, gets downloaded and built from source, and that is what eats most of those seven minutes:
# Install the Beam SDK on its own first, so it is not rebuilt from source
# when the launcher processes requirements.txt at job-submission time.
RUN pip install -U apache-beam==<version>
# Then install the remaining (non-Beam) requirements.
RUN pip install -U -r ./requirements.txt
While developing, I prefer to use the DirectRunner for the fastest feedback.
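For illustration, a minimal sketch of that local setup, assuming a standard apache_beam Python pipeline (the transforms below are placeholders, not the actual streaming job):
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

# Run locally with DirectRunner: no launcher VM, no QUEUED state,
# feedback in seconds rather than minutes.
options = PipelineOptions(runner="DirectRunner")

with beam.Pipeline(options=options) as pipeline:
    (pipeline
     | "Create" >> beam.Create(["hello", "world"])
     | "Print" >> beam.Map(print))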

TFS 2017 Test plan: Set test cases as Not applicable or Blocked from trx result file

After automated tests are executed in the VSTest task, vstest.exe generates a trx result file. This trx contains only 3 outcome types: Passed, Failed and Skipped. These outcome types are published into the TFS test plan.
Is there any way to add a "Not applicable" or "Blocked" outcome to the trx and to update test cases in the TFS test plan based on that outcome?
I believe a custom logger should be used, but what outcome type can I use for "Not applicable" or "Blocked"?
<Counters total="1" executed="0" passed="0" failed="0" error="0" timeout="0" aborted="0" inconclusive="0" passedButRunAborted="0" notRunnable="0" notExecuted="0" disconnected="0" warning="0" completed="0" inProgress="0" pending="0" />
There are 9 possible test outcomes for UnitTestOutcome:
https://learn.microsoft.com/en-us/dotnet/api/microsoft.visualstudio.testtools.unittesting.unittestoutcome?view=mstest-net-1.2.0
But vstest.exe actually generates and puts only 3 of these types into the trx.
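As an illustration only (a sketch, not a confirmed TFS 2017 recipe: the server URL, run id, result id and api-version below are all placeholder assumptions), one workaround is to patch the already-published results afterwards through the TFS REST API, whose TestOutcome values do include Blocked and NotApplicable:
# Hypothetical sketch: update published test results via the TFS REST
# API after the run completes. All ids and the api-version are
# placeholders; check what your TFS 2017 instance supports.
import requests

url = ("http://tfsserver:8080/tfs/DefaultCollection/MyProject"
       "/_apis/test/runs/123/results?api-version=3.0-preview")
payload = [{"id": 100000, "outcome": "Blocked"}]  # or "NotApplicable"

response = requests.patch(url, json=payload, auth=("domain\\user", "password"))
response.raise_for_status()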

TFS 2017: How do I know which test is being run (before it finishes)?

I have a TFS 2017 (version 15.105.25910.0) build which also runs tests, but one test is taking a very long time and the whole build is cancelled due to a timeout set in the 'general' tab of the build edit page. TFS log is included below. How can I check which test is faulty?
Notice the time difference between the first and second log lines. I assume a faulty test is being run after ReportAnalyzer_Blabla_SomethingTest, but with over 1k tests it's hard to guess which one it is.
2017-08-30T11:30:09.7614471Z Passed ReportAnalyzer_Blabla_SomethingTest
2017-08-30T11:53:52.1581687Z ##[debug]FindFiles.FindMatchingFiles(rootFolder = D:\TfsBuildAgents\RmsBuild\_work\8\s\TestResults, matchPattern = *.trx, includeFiles = True, includeFolders = False
2017-08-30T11:53:52.1581687Z ##[debug]FindFiles.GetMatchingItems(includePatterns.Count = 1, excludePatterns.Count = 0, includeFiles = True, includeFolders = False
2017-08-30T11:53:52.1581687Z ##[debug]FindFiles.FindMatchingFiles - Found 0 matches
2017-08-30T11:53:52.1581687Z ##[debug]Processed: ##vso[task.logissue type=warning;code=002003;]
2017-08-30T11:53:52.1581687Z
2017-08-30T11:53:52.1581687Z
2017-08-30T11:53:52.1737949Z ##[warning]No results found to publish.
2017-08-30T11:53:52.1737949Z ##[debug]Processed: ##vso[task.logissue type=warning]No results found to publish.
2017-08-30T11:53:52.2050485Z ##[error]The operation was canceled.
2017-08-30T11:53:52.2050485Z ##[debug]System.OperationCanceledException: The operation was canceled.
Normally the faulty test should be the first test run after ReportAnalyzer_Blabla_SomethingTest. But as you said, with over 1k tests, the log you posted does not let us identify exactly which test is the faulty one if you didn't split the tests. In this case, I'm afraid you have to debug them one by one.
So, you can try to split the tests and then debug them accordingly.
You can also check whether there are any other detailed logs that track this.
See Review continuous test results after a build for more information.
I've found a messy workaround which helped me find the failing test. In all test classes (the messy part) I've added code which appends the name of the currently running unit test to a file; the last entry was the one I was interested in.
[ClassInitialize]
public static void ClassInitialize(TestContext testContext)
{
    // Just an example: append the test name so the last entry in the
    // file shows roughly where the run got stuck.
    File.AppendAllText("testRunLog.txt", testContext.TestName + Environment.NewLine);
}
The closest thing to "run code before each test in the whole test project" seems to be the ClassInitialize attribute (note that it runs once per test class, before the first test in that class, rather than before every test).
https://msdn.microsoft.com/en-us/library/microsoft.visualstudio.testtools.unittesting.classinitializeattribute.aspx

How to get Integration Test Results with additional output like number of: total tests, failed tests etc in Sonarqube?

My setup has Jenkins integrated with SonarQube, and I'm looking to have SonarQube display the following results:
Number of Tests
Number of Tests Passed
Number of Tests Failed
Number of Tests Skipped
Total Execution time
As of now, the Integration Test Results widget gives me only the coverage results output.
If you have unit and/or IT tests, they should be 'uploaded' to your SonarQube project dashboard (in the same way you should see them on your Jenkins project dashboard).
Then, in the 'Measures' menu, under the 'Coverage' tab, a lot of information is displayed, such as the number of failed tests, the number of successful tests, the total execution time, the number of skipped tests, and so on.

Jenkins - summarising test result changes from XUnit

I'm running NUnit tests using Jenkins (and the XUnit plugin), and Email-Ext to send out build result summaries.
I'd like to be able to email out something like "3 new test failures: [Names of tests that failed]." I can't work out how to get which tests changed from a previous run.
So far I have:
${TEST_COUNTS,var="total"} tests: ${TEST_COUNTS,var="pass"} pass,
${TEST_COUNTS,var="fail"} fail, ${TEST_COUNTS,var="skip"} skipped
giving
1914 tests: 1903 pass, 10 fail, 1 skipped
and ${FAILED_TESTS} giving the details of all failing tests - but I can't work out how to get just the changes from the previous run.
Viewing the job in Jenkins gives the information I need, so it ought to be possible.
Try this one:
============================
TESTS
There are ${TEST_COUNTS, var="total"} total tests of which ${TEST_COUNTS, var="fail"} test(s) failed.
$FAILED_TESTS
Try this:
CHANGES (All changes since first failure)
${CHANGES_SINCE_LAST_SUCCESS, reverse=true}
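Depending on your Email-Ext version, the FAILED_TESTS token may also support an onlyRegressions parameter (check the token reference shipped with your installed plugin); if it does, something like this should come close to the "3 new test failures" summary you want:
NEW FAILURES
${FAILED_TESTS, onlyRegressions=true, maxTests=20}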
