Extent Report / Klov: How to limit the visibility of test cases from other projects when viewing test case history/search

I am facing issues while comparing test cases in History, as the same test case is referenced in multiple projects. Currently Klov fetches the test case data from all projects and displays it in the History section.
Is there any way we can limit the visibility of test cases at the project level?
KLOV Version: 0.1.1
Extent: 3.1.5
Thanks,
~Sundan

Related

How to use gitflow for one release

I'll start working on an iOS app from scratch that is simply a replication of an existing Android app.
There are 7 modules in this app (login, register, ...). The client wants to test each module as it is finished, and the app will be published to the App Store once all modules are completed and well tested.
I'll use git-flow in this project, and it has many branches (master, develop, release, features...).
My question is: how can I use the "release" branch in this case, when there is only one version (1.0) that will be published at the end of the project?
And how can I manage delivering an IPA for one module (feature) for testing while a new module (feature) is already in development?
You're overthinking it.
Tags in master don't represent code that is "released to the public"; they represent code that has "finished development and testing". These tags are not public, they are markers to help the development team manage the code.
Publishing code to the users (public or private) is a business decision that is NOT relevant to git-flow or any other branching model. Saying that something meets the expected requirements is a project decision that IS relevant. *
So for the situation you describe, I think the following process (or something similar) makes sense:
Create a feature branch off of develop
Complete development work (including code review and developer testing)
Merge feature back to develop
Create a release branch for QA testing **
After testing (and fixing) is complete, merge release to master and tag it as version "0.1"
Merge release to develop
Repeat for version "0.2", "0.3", etc.
In master you will have tags "0.1", "0.2", etc. These represent incomplete versions of the application that are functional and stable, but not suitable for complete release. Finally, when you have version "0.7" (or however many cycles this takes) the business will make the decision that "this version of code is complete and suitable for release". Then (and ONLY THEN) do you create the tag "1.0" in master.
Future development will then use the tags "1.1", "1.2" etc. In short, in this versioning scheme the first number represents the "released version" (as determined by the business) and the second number represents the "development iteration".
*Obviously these two concerns/processes interact and inform each other, but that is an entirely different topic.
**You don't have to do this for every feature branch, but it sounds like you're working alone and will be developing things sequentially. It's completely reasonable to merge multiple feature branches and create a single release branch for testing.
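For concreteness, here is a minimal sketch of one iteration of that process driven through plain git from Python. It assumes a repository that already has develop and master branches; the feature branch name and the "0.1" tag are illustrative only.

# One development/release iteration from the steps above.
import subprocess

def git(*args):
    """Run a git command in the current repository and fail fast on errors."""
    subprocess.run(["git", *args], check=True)

feature = "feature/login"      # hypothetical module branch
release = "release/0.1"

git("checkout", "-b", feature, "develop")       # 1. feature branch off develop
# 2. ...development work, code review, developer testing, commits...
git("checkout", "develop")
git("merge", "--no-ff", feature)                # 3. merge feature back to develop
git("checkout", "-b", release, "develop")       # 4. release branch for QA testing
# ...QA testing and bug fixes happen on the release branch...
git("checkout", "master")
git("merge", "--no-ff", release)                # 5. merge release to master...
git("tag", "-a", "0.1", "-m", "development iteration 0.1")   # ...and tag it
git("checkout", "develop")
git("merge", "--no-ff", release)                # 6. merge release back to develop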

Test results for a group of tests (ordered test) are displayed as only one test result per group

In our company we have many UI tests which are arranged in playlists (*.orderedtest files). In the release definition these tests run as ordered tests: VSTest_configure.
After running the tests, the results are displayed as only one test result per playlist (*.orderedtest): Test Results. But all of the tests from the playlists were run. When I open the *.trx file for one *.orderedtest (e.g. ListFrame1) in Visual Studio, I can see the test results:
TestResultTRX, and after double-clicking the test name "playlist_listframe1" I can see all of the test results from playlist ListFrame1: Results from all tests of the playlist ListFrame1.
Does anybody know why the results of all tests from a playlist are not displayed in the test results on the TFS web portal? This issue began to appear after TFS was updated to TFS 2018 Update 3.2 and the release definition with the VSTest tasks was re-saved. The VSTest task now has a new setting, "Batch tests", under "Advanced execution options". Is it possible that this new functionality changed how test results are published or created?
The expected state is that results are displayed for all of the tests from the playlists, just like before the update to TFS 2018 Update 3.2: Expected Test Results
Please give me some advice. Thank you.

How to version assemblies—pre-build—based on work items

I'd like to automatically increment my assembly versions based on this ruleset:
Revision is always 0
Build is incremented when the only WIT in the release is a Bug fix
Minor is incremented when the release contains any WIT other than a Bug fix; Build is then always set to 0
Major is never automatically incremented
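For illustration, the bump rule itself reduces to a few lines once the work item types for the release are known; a minimal Python sketch (the type name "Bug" is the default TFS one and may differ in your process template):

# Sketch of the version bump rule above. `current` is (Major, Minor, Build);
# Revision is always 0. `wit_types` is the list of work item types linked
# to the release, e.g. ["Bug"] or ["Bug", "User Story"].
def next_version(current, wit_types):
    major, minor, build = current
    if wit_types and all(t == "Bug" for t in wit_types):
        return (major, minor, build + 1, 0)   # bug-fix-only release: bump Build
    return (major, minor + 1, 0, 0)           # any other WIT: bump Minor, reset Build

print(next_version((1, 4, 2), ["Bug", "Bug"]))         # -> (1, 4, 3, 0)
print(next_version((1, 4, 2), ["Bug", "User Story"]))  # -> (1, 5, 0, 0)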
Naturally this will require a build step that can interact in some way with the project.
My first thought was to build a small Windows Service that utilizes the TFS SDK to construct the version number based on these rules and return it via a WCF call, etc. But I run into a problem there with a business requirement that all code and functionality must be replicated into a VSTS project as well (the customer owns the code and must be able to proceed without me). There's no installing such a service there, of course.
I then considered installing the service on his server, in turn making it available to VSTS. This would pass the Rube Goldberg test with flying colors.
Is there an easier way of accomplishing this task? One that can work in both environments?
EDIT
I found this, but it's doubtful that the TFS SDK is registered in the GAC for VSTS.
Can someone confirm? Is the TFS SDK available to build scripts running on VSTS?
Well now that didn't take long.
I found this and this for using PowerShell to query the REST API. No GAC/SDK needed.
EDIT
I've intentionally excluded content from the pages behind these links as the solutions provided are exceedingly complex; it's not possible to cover the concepts here in a single post. In case the pages disappear or the URLs change, here are the links at archive.org:
1. PowerShell and vNext Builds
2. VSTS/TFS REST API: The basics and working with builds and releases
In any case, the concept is popular and well-covered—in the event these two become inaccessible, there are many others available on the same subject matter. As quickly as I found these, someone could find more.
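Those posts use PowerShell; the same query works from any language that can call the REST API. A rough Python sketch, assuming the "work items associated with a build" endpoint and a personal access token (the account, project, build id and api-version values are placeholders to verify against your TFS/VSTS release):

# Read the work item types linked to a build via the REST API, then feed
# them into the bump rule sketched earlier. All identifiers below are
# placeholders, not values from the original posts.
import requests

account, project, build_id = "myaccount", "MyProject", 1234
base = f"https://{account}.visualstudio.com/DefaultCollection"
auth = ("", "personal-access-token")            # PAT via basic auth

# Work items associated with the build (returns id/url references).
refs = requests.get(
    f"{base}/{project}/_apis/build/builds/{build_id}/workitems",
    params={"api-version": "2.0"}, auth=auth).json()["value"]

# Look up each work item to read its type.
wit_types = []
for ref in refs:
    wi = requests.get(f"{base}/_apis/wit/workitems/{ref['id']}",
                      params={"api-version": "1.0"}, auth=auth).json()
    wit_types.append(wi["fields"]["System.WorkItemType"])

print(wit_types)   # e.g. ["Bug", "Bug"] -> feed into next_version(...)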

Tracking results between different projects in Jenkins

I test some software that has different releases running concurrently, in the usual "LTS", "Release", and "Unstable" versions. We have a third-party test suite that we like to run on all three versions. Because it is third-party, there are expected failures in the builds that execute the suite.
Currently, we have Jenkins set up with a different project for each of the point releases of each of the three versions. It is easy for us to track the results of the test suite within one project, but that limits us to just that point release. We want to be able to compare and get trends for the test suite across the three versions as well as across the point releases. We want to know whether the project for LTS 3.2 has more failures overall than the project for LTS 3.3.
I'm not seeing an obvious plugin for graphing the number of passes/failures across different Jenkins projects; is there a method for doing this?
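For reference, the pass/fail counts in question are exposed per job by Jenkins' JSON remote API whenever a build publishes JUnit-style results, so a small script can collect and chart them across projects even without a dedicated plugin. A minimal sketch, with a placeholder Jenkins URL and job names:

# Collect pass/fail counts from several Jenkins jobs via the JSON remote API
# (works when the builds publish JUnit-style test results). The Jenkins URL,
# job names and credentials below are placeholders.
import requests

JENKINS = "https://jenkins.example.com"
JOBS = ["suite-lts-3.2", "suite-lts-3.3", "suite-release", "suite-unstable"]

for job in JOBS:
    url = f"{JENKINS}/job/{job}/lastCompletedBuild/testReport/api/json"
    report = requests.get(url, params={"tree": "passCount,failCount,skipCount"},
                          auth=("user", "api-token")).json()
    print(f"{job}: {report['passCount']} passed, "
          f"{report['failCount']} failed, {report['skipCount']} skipped")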

Run test plan against 3rd party versioned programs

Using Visual Studio Online, I created a test plan for a program, written by a different company, that my company uses. We have a specific set of tests that need to be run before we accept a new version of this program. So when I edit the test plan I would like to be able to manually select a build by typing in, say, version "1.0.1.195". Then when a newer version comes out I can just type in the newer version and retest using those same tests. However, when I go to select a build, TFS filters against the builds of my own code. Is it possible to do what I'm asking using TFS?
EDIT
To answer a few of the questions in the comments, I'll be a bit more descriptive about what I am doing. A 3rd party company made a program we use to test some hardware. Every now and then there is an update to that software. Since a few of us use this program to test the hardware, we need to know that the software can be installed with little to no downtime while upgrading. So we came up with a small set of tests that we run the program through to make sure that we can test reliably. Those tests were written in a Word document, so I put them into MTM. Although I make some software that is related to this, their software depends on mine. I've not had to update my code for some time now. My overall intention is to use MTM to document my testing of this program.
Do you want to store the version of the 3rd party component along with the test result of the test run it was tested with on TFS?
That would be nice. My ultimate end game is to put the results of said tests back into that Word document and make it available to those who don't have MTM installed (which is everyone). This way, when a new version of the software comes out, I can just go into MTM, reset all my tests back to Active, update the version number, and retest.
The build you set up in Microsoft Test Manager (MTM) defines the drop location containing your tests, not the application under test (it can be a different build if you build your tests with another build definition).
That's why you can only select one of the builds of your own code.
What you are talking about is deployment.
That means you have to make sure the right version of the 3rd party program is deployed to the environment the tests are running on.
EDIT
What you need is a Test Configuration
Here you can find a good explanation of how to create one: Test Configurations - specifying test platforms
The idea in your use case would be as follows (below I'm using the terms described in the article mentioned above):
Create a Configuration Variable where you will store the current version of the 3rd party program
Create a Test Configuration and add this variable to it.
Set this Test Configuration as default test configuration for your test plan.
Be aware that if your test plan already contains test cases, you will have to add this Test Configuration to each Test Case manually, since only newly added Test Cases get it assigned automatically.
If you get a new version of the 3rd party program you will:
Add the new version of the program to the values allowed for the Configuration Variable
Open the Test Configuration you are using and update the program's version to the new one.
Reset your tests and run them.
Doing so you:
store all versions you have tested so far in the Configuration Variable, since you add the new one instead of overwriting the old one, so you get a kind of history.
store the last version you have tested in the Test Configuration.
That should meet your needs.
Additional suggestion
(has nothing to do with your question but with your use case)
Consider describing tests within your Test Cases instead of creating a Word document.
Here is a good place to start reading: How to: Create a Manual Test Case
The benefits would be:
You can run your manual tests using the Test Runner provided by MTM
Doing so, you will have all the steps you performed stored with the Test Result, you can add comments to each step while executing it, etc.
You can still export the test description to a Word document using this MTM add-on: Test Scribe.
Using this add-on you can also create a report of your test execution.
Additionally, if you are going to use MTM more in your daily job, I would recommend this free e-book: Testing for Continuous Delivery with Visual Studio 2012
