TFS 2015 test case outcome = "None"

I'm using TFS 2015 to run automated regression tests nightly. There are over 100 tests in the suite but each night 1 of them, usually the same one, shows up with the outcome "None".
If I look at the test log, I see that the test does not fail. If I remove this test from the suite, then on the next run the test above that one (in code order) shows as "None", but it also passes.
What could cause this and how do I determine the cause?

We upload results after finishing execution of the set of tests given to an agent. From #1410 it sounded like all your tests are currently going to one machine, which would mean all results are uploaded only at the end. Also, the Test Results tab in the Build Summary only shows results once the run is complete. You can go to the Runs tab in the Test Hub to see test results as and when they are available, even for in-progress runs.
So in other words, you have to wait for the process to complete fully: the most likely root cause of the "None" outcome is that your entire test run hasn't finished yet, or that something is preventing it from finishing.
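If you want to verify this, one option is to check the run's state programmatically. Below is a hedged sketch (TypeScript, Node 18+) against the TFS test-runs REST API; the server, collection, project, credentials and api-version are placeholders to adjust for your instance, and an on-prem TFS 2015 server may require NTLM rather than Basic authentication:

// Hypothetical instance details -- replace with your own.
const url =
  'http://tfsserver:8080/tfs/DefaultCollection/MyProject/_apis/test/runs?api-version=1.0';

interface TestRun {
  id: number;
  name: string;
  state: string; // e.g. 'InProgress' or 'Completed'
}

async function listRuns(): Promise<void> {
  // Basic auth is an assumption here; on-prem TFS often uses NTLM instead.
  const auth = Buffer.from('domain\\user:password').toString('base64');
  const res = await fetch(url, { headers: { Authorization: `Basic ${auth}` } });
  const body = (await res.json()) as { value: TestRun[] };
  for (const run of body.value) {
    // While a run is still 'InProgress', tests that have not yet reported
    // back will show an outcome of 'None'.
    console.log(`${run.id}\t${run.state}\t${run.name}`);
  }
}

listRuns().catch(console.error);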

Related

Playwright: Running certain files parallel and others serial

I've created a few tests for my project; some tests need to run async while others can run sync.
Currently the config file is set to run async.
I've set up a test file that runs them in the correct order: I've imported all the tests that should run sync into a single file, and tried adding
test.describe.configure({ mode: 'parallel' })
This changes the whole test process to run in parallel.
I can't seem to find any documentation on how to execute only certain tests async and others sync. Does anyone have experience with this?
The reason I need this for certain files is to log in and authenticate before continuing; also, certain actions affect the layout of the whole UI (even in a different browser) and will mess up other tests' screenshots.
Playwright runs all tests in parallel by default, but there are options to serialize tests per file. You cannot have both parallel and serial tests in one file.
https://playwright.dev/docs/test-parallel#serial-mode
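For illustration, a minimal sketch of the linked serial mode in a single spec file (the file name, URL and selectors are made up); every other spec file keeps the default parallel behaviour:

// sync-tests.spec.ts -- a made-up spec file name. Serial mode applies per
// file: the tests below run one after another in a single worker.
import { test, expect } from '@playwright/test';

test.describe.configure({ mode: 'serial' });

test('log in and authenticate', async ({ page }) => {
  await page.goto('https://example.com/login'); // placeholder URL
  await page.fill('#user', 'tester');           // placeholder selectors
  await page.fill('#pass', 'secret');
  await page.click('button[type=submit]');
  await expect(page).toHaveURL(/dashboard/);
});

test('depends on being logged in', async ({ page }) => {
  // In serial mode, if the previous test fails this one is skipped.
  await page.goto('https://example.com/dashboard');
});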

How to properly configure Jenkins to display Jmeter tests?

Hi guys!
I'm running some JMeter tests on Jenkins, and for some reason it stops displaying the results of the latest test run and freezes, only showing results from a run that completed before. For example, if I start a new run, it will display the results from a previous one over and over again, no matter how many times I run the test suite... Is there any config I need to change?
Thanks!
I'm using the Performance plugin... Analysing it a bit more, I found out that if a test fails once, this failure will be displayed every time I run a new test, and Jenkins shows it as a failed build... In the image, the failed test contains two outputs in the HTTP code: 200, which stands for success, and 500, which stands for failure (I believe I'm right).

how to show quantitative results in Jenkins

Jenkins test results screen shows only pass/failed results.
I would like to show quantitative results (numbers, percentages, time durations, etc.) parsed from logs,
e.g. memory usage, run time of specific methods, etc.
What is the best way to do so?
Thanks
Good question.
I imagine you're looking to extend your Jenkins instance with plugins that present more information about the tests you've run. This plugin seems relevant, but it requires some experience with JMeter (a Java-based performance measurement tool) to generate the output that the plugin can then read and display, from a JMeter task that runs every time your build runs:
https://wiki.jenkins-ci.org/display/JENKINS/Performance+Plugin
The 'readme' on the plugin's detail page explains how to set up a project to run JMeter (see 'Configuring a project to run jmeter performance test:' near the bottom).
Another way to do something similar, though not as immediately tied to a specific Jenkins build, is to run resource monitors (like Cacti or collectd) on the machines running the tests and analyze those results post-build; but again, that's outside the Jenkins context.
HTH.

HTTP access to on-going Jenkins build files

I guess the title is pretty self-explanatory. The reason I want that is so that I can make a live custom HTML reporter for my tests.
My test suite takes hours to complete, and although the tests generate HTML reports as soon as each test step is executed, it's only at post-build time that those report files get published.
Being able to see them as they get generated would reduce the time it takes for me and my teammates to analyze and act upon issues revealed by our test runs.
All I need is for Jenkins to let me access the build files while the build executes. Nothing fancy; I can take care of the rest. Is that possible? How?
In our setup there is always an intermediate file (typically XML) but the HTML files are created at the end of the job.
What you can do is use the progressive output (http://jenkins/job/jobName/buildNumber/logText/progressiveText?start=0). Although you don't state which framework you use, most of them output something that would be easy to parse, e.g. "Test xxx failed".
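As an illustration, a minimal polling sketch (TypeScript, Node 18+); the build URL and the failure pattern are made-up placeholders, while X-Text-Size and X-More-Data are the headers Jenkins returns from this endpoint:

// Hypothetical build URL -- replace with your own job and build number.
const BUILD_URL = 'http://jenkins/job/jobName/42';

async function tailConsole(): Promise<void> {
  let start = 0;
  for (;;) {
    const res = await fetch(`${BUILD_URL}/logText/progressiveText?start=${start}`);
    const chunk = await res.text();
    for (const line of chunk.split('\n')) {
      // Adjust the pattern to whatever your test framework prints.
      if (/Test .+ failed/i.test(line)) console.log('FAIL:', line.trim());
    }
    // X-Text-Size is the offset to resume from on the next request.
    start = Number(res.headers.get('X-Text-Size') ?? start);
    // X-More-Data is absent once the build has finished.
    if (res.headers.get('X-More-Data') !== 'true') break;
    await new Promise((r) => setTimeout(r, 5000)); // poll every 5 seconds
  }
}

tailConsole().catch(console.error);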

How to make a FitNesse test require explicit running

Is there a way to mark a FitNesse test such that it will not be run as part of a suite, but can still be run manually?
We have our FitNesse tests running as part of our continuous integration, so new tests that are not yet implemented cause the build to fail. We'd like a way to allow our testers and BAs to be able to add new tests that will fail while still continuing to validate the existing tests as part of continuous integration.
Any suggestions?
The best way to do this is with suite tags. You can mark tests with a tag from the properties page, and then you can either filter for the tag or filter to exclude it.
In this case I would exclude with a "NotOnCI" tag, then add the following argument to the URL:
ExcludeSuiteFilter=NotOnCI
This might look like this then as the full URL:
http://localhost:8080/FrontPage?test&ExcludeSuiteFilter=NotOnCI
You can select multiple tags by separating them with commas, but they act as "or", not "and".
Check the FitNesse user guide for more details. http://fitnesse.org/FitNesse.UserGuide.TestSuites.TagsAndFilters
Would it make sense to have multiple Suites, one for regression tests that should always pass, and another one for the tests that are not yet implemented?
Testers and BAs can add tests/suites to the latter suite and the CI server only runs tests in the former suite.
Once developers believe they have implemented the behavior, they can move the test/suite relating to that functionality to the 'regression' suite so that it will be checked in continuous integration.
This might make the status of a test/suite a bit more explicit/obvious than just having a tag. It would also provide a clear handover from development to test/BA to indicate the implementation is finished.
If you just want to have a test/suite not run during an overall run of a suite that contains the particular test/suite you could also just tick 'Skip (Recursive)' in the properties page of that test/suite (below 'Page Type').
