Which tests should be run since a previous TFS build?

My managers want us to determine which tests might have to be run, based on code changes made to the application we are testing.
But it is hard to know which tests actually need to be re-verified as a result of a code change. What we have commonly done is test the entire area where the code change occurred, or the entire project or solution.
We were told this could be achieved with TFS build or MTM tools. Could someone share the details?
P.S. We are running on TFS 2015 Update 4 with VS 2017.

There is a concept called Test Impact Analysis (TIA) which helps analyze the impact of development changes on existing tests. Using TIA, developers know exactly which tests need to be verified as a result of their code change.
The Test Impact Analysis (TIA) feature specifically enables this: TIA is all about incremental validation by automatic test selection. For a given code commit entering the pipeline, TIA will select and run only the relevant tests required to validate that commit. That test run will therefore complete faster; if there is a failure you will get to know about it sooner; and because it is all scoped by relevance, analysis will be faster as well.
Test Impact Analysis for managed automated tests is available via a checkbox in the 2.* preview version of the VSTest task.
If enabled, only the relevant set of managed automated tests needed to validate a given code change will run. Test Impact Analysis requires the latest version of Visual Studio and is presently supported in CI for managed automated tests.
However, this is only available with TFS 2017 Update 1 (it needs the 2.* preview version of the VSTest task). For more details, please refer to this blog post: Accelerated Continuous Testing with Test Impact Analysis
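For reference, on newer servers that support YAML build definitions, the same switch is exposed as an input on the VSTest task. A minimal sketch, assuming the 2.x task and test assemblies matching the default pattern (runOnlyImpactedTests is the checkbox mentioned above; runAllTestsAfterXBuilds is an optional safety net):

    steps:
    - task: VSTest@2
      displayName: Run impacted tests only
      inputs:
        testAssemblyVer2: |
          **\*test*.dll
          !**\obj\**
        runOnlyImpactedTests: true    # let TIA skip tests unaffected by this change
        runAllTestsAfterXBuilds: 50   # periodically run the full suite as a safety net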

Related

Run specific test with every integration in TFS

How to make specific test cases run when changes are made to a specific component in TFS?
For example: I have 10 test cases for component A and 10 test cases for component B. When a developer merges code related to component A, I want only the test cases related to component A to run.
What you need is Test Impact Analysis. As described in this MSDN article:
Test Impact Analysis (TIA) helps in analysis of impact of development on existing tests. Using TIA, developers know exactly which tests need to be verified as a result of their code change.
To enable test impact analysis in a build process, you need to:
1). Configure test impact analysis in a test settings file (a sketch follows this list). Check the "Enabling Test Impact Collection" part of this article.
2). Specify that the build definition should use that test settings file.
3). Set Analyze Test Impact to true. Check the "Enable Test Impact Analysis" part of this article for the details.
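For step 1), the piece that matters in the .testsettings file is the Test Impact data collector. A minimal sketch of where it lives (Visual Studio's test settings editor normally generates this entry when you tick "Test Impact" under Data and Diagnostics; the settings name and GUID here are placeholders):

    <?xml version="1.0" encoding="UTF-8"?>
    <TestSettings name="TestImpact" id="00000000-0000-0000-0000-000000000000"
                  xmlns="http://microsoft.com/schemas/VisualStudio/TeamTest/2010">
      <Execution>
        <AgentRule name="Execution Agents">
          <DataCollectors>
            <!-- Enables Test Impact collection so the build can map tests
                 to the code they exercise -->
            <DataCollector uri="datacollector://microsoft/TestImpact/1.0"
                           friendlyName="Test Impact" />
          </DataCollectors>
        </AgentRule>
      </Execution>
    </TestSettings>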
Additionally, it is possible for you to customize your build process template to only run impacted tests: http://blog.robmaherconsulting.co.nz/2011/03/tfs-2010-build-only-run-impacted-tests.html

Reports from Jenkins and Jira

We are using Jenkins to run the Selenium automation tests, and my manager wants to see the list of failed builds and what percentage of the tests passed for each build. We also have manual tests that are executed in JIRA. I need to combine both and derive test metrics from them.
The way I am thinking of proceeding is as follows:
Get the Jenkins data into JIRA first using the Jenkins plugin for JIRA.
Use the JIRA API to collect the test results from Jenkins and the manual tests run in JIRA.
Prepare a dashboard in JIRA to display all the metrics.
Could you tell me whether the above approach is correct, and suggest anything additional?
Thanks in advance!
Are you using Cucumber? In that case you could use the cucumber reporting plugin for Jenkins. If it doesn't suit your needs but you still use Cucumber, you can also generate reports in a format like JSON, which you can later parse to extract your data.
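If you go the JSON route, computing a pass percentage is a short script. A minimal sketch in Python, assuming the standard output of Cucumber's JSON formatter (the report file name is a placeholder):

    import json

    # Cucumber's JSON formatter emits a list of features, each with
    # scenario "elements", each of which has "steps" with a result status.
    with open("cucumber-report.json") as f:
        features = json.load(f)

    total = passed = 0
    for feature in features:
        for element in feature.get("elements", []):
            if element.get("type") != "scenario":  # skip backgrounds
                continue
            total += 1
            # A scenario counts as passed only if every step passed
            if all(step.get("result", {}).get("status") == "passed"
                   for step in element.get("steps", [])):
                passed += 1

    if total:
        print(f"{passed}/{total} scenarios passed ({100.0 * passed / total:.1f}%)")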
I have the feeling that what you want to do is a bit complicated for not much benefit. If the tests are failing, it's likely you'll have to look at what is happening anyway. Having the percentage is nice, but you can spend hours or days tailoring this just to have something pretty that your manager wants but that serves no specific purpose. I would opt for something simpler:
If the automated tests fail, create a JIRA issue automatically from Jenkins. You could put the build number as a tag, or in the title. You could also create one on every run, to indicate that build nr. ## was tested and everything went OK.
As part of the manual testing process, report in JIRA what failed.
Create a dashboard and play a bit with tags and search to show which builds failed.
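For the first point, a small post-build script can create the issue through JIRA's REST API. A minimal sketch in Python (the JIRA URL, project key, and credentials are placeholders; BUILD_NUMBER is the standard Jenkins environment variable):

    import os
    import requests  # third-party: pip install requests

    JIRA_URL = "https://jira.example.com"  # placeholder
    build_number = os.environ.get("BUILD_NUMBER", "unknown")

    payload = {
        "fields": {
            "project": {"key": "QA"},  # placeholder project key
            "summary": f"Automated tests failed in build #{build_number}",
            "description": "See the Jenkins console log for details.",
            "issuetype": {"name": "Bug"},
            "labels": [f"build-{build_number}"],  # the build number as a tag
        }
    }

    resp = requests.post(
        f"{JIRA_URL}/rest/api/2/issue",
        json=payload,
        auth=("jenkins-bot", os.environ["JIRA_API_TOKEN"]),  # placeholder credentials
    )
    resp.raise_for_status()
    print("Created issue:", resp.json()["key"])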
I would suggest AssertThat BDD & Test Management in Jira
It provides end-to-end integration, from feature creation to manual and automated test execution and reporting, with out-of-the-box integration with test automation frameworks through plugins.
The plugin allows you to download feature files stored in Jira before the run, execute the tests in the usual way, and then upload the Cucumber test results back to Jira, which gives you a clear view of the testing progress in one place.
More info and usage examples on website https://www.assertthat.com/

Exclude specific paths during triggering TFS team build

I'm configuring continuous integration with TFS 2012, and I have one problem to solve.
I need to exclude some paths from triggering builds.
For e.g. I have:
$/Project1
$/Project2
And I want a build to be triggered after each check-in to $/Project1, and it must build both $/Project1 and $/Project2.
But after a check-in to $/Project2, I don't want a build to be triggered for that build definition.
In the Source Settings of the build definition there are only the options "Active" and "Cloaked", but that isn't what I need.
Thanks a lot in advance.
P.S. The workaround I know of is to add the comment ***NO_CI*** to the check-in, but it would be great if there is some other way.
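For reference, the ***NO_CI*** workaround is applied in the check-in comment itself; for example, from the command line with tf.exe (the comment text is a placeholder):

    tf checkin /recursive /comment:"Update Project2 resources ***NO_CI***"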
Based on the comments, this boils down to an X-Y problem. You can't do what you want to do, but the reason you want to do it is because you're trying to solve the wrong problem.
You're running the UI tests at the wrong point in your dev-test cycle. UI tests should not run during a build, they should run after a release. A change to your test project should absolutely result in a build.
Someone is developing UI tests against code that's not yet in source control, which makes no sense. If someone is writing tests against code, the code should be source controlled.
I'm guessing that someone is manually pushing uncommitted code out to a dev server, which is being used by someone else to write tests. Don't do this. Use a real release management solution so that as developers write code, each check-in is automatically deployed to a dev/QA environment. Then the folks writing the UI tests will have something to test against. What's the point in writing tests against code that's in such a state of flux that the developer responsible for it isn't even sure it's worth being source controlled? That just results in spending a lot of time rewriting tests as the code evolves.
Assuming you set everything up properly, every commit of the application code will result in the current set of tests being run against the latest commit. Every commit of the test code will result in the new set of tests being run against the existing application code. The two things (application code and test code) should be coupled, and should always build together.
And one last thing, mostly opinion: UI tests are awful and provide very little utility. They are brittle, slow, and hard to maintain. I have never seen a comprehensive UI test suite actually provide value. UI tests are best used as a small set of post-release smoke tests. Business logic should be primarily unit tested, with a smaller suite of integration tests to back it up.

automated tests and continuous integration in tfs 2010

Some background: I am not a TFS guy and I don't know much about build scripts, etc.
1 - Is there a way to run tests for every check-in in TFS? What I'm dreaming of is: if any of the tests fail, the build server rejects the changeset. Is this possible with TFS, or does it need some other tool like Hudson, CruiseControl, etc.? What are the other powerful tools?
2 - Does using such a tool make it possible to run only a portion of the tests, not all of them (i.e. only unit tests, not integration tests)?
I am not interested in technical details like how it can be done, as that is our TFS team's job. Rather, I am after some high-level info about the possibilities.
In TFS you have what are called check-in policies. With those in place you could forbid checking in anything unless all of the unit tests pass. You could even enforce FxCop rules, etc., but that would be cruel to your developers.
If you already have a Continuous Integration build set up, then change the trigger to Gated Check-in; this will do exactly what you want. When a developer tries to commit, TFS will start the build; if the build fails, the check-in is rejected and TFS instead creates a shelveset of the changes.
As for running a portion of the tests, you would probably need to create a test list in your VSMDI file that defines the tests you want to run, and then configure the build to use that list.

Bug/issue tracking integration with Cruise control

I am putting together a set of products to create automated builds for the Microsoft platform (the products I chose and the software I will build both run on Windows). The products I've chosen are:
Code repository: SubVersion
Continuous integration: CruiseControl
Unit testing: NUnit
Test coverage: NCover
Static code analysis: FXCop
Now I need to choose a bug/issue tracking system (free if possible) that can be, in some way, integrated with the previous products.
What do I mean by integration? Well, all of these products produce a file as output, and I want to be able to publish the errors and bugs they find into the tracking system.
Do you know of a product, technique, or trick that can help me do this?
Thanks in advance.
First off, these are all tools I have experience with, and I congratulate you on your choices; these tools will serve you well if you use them wisely.
The most common usage of these tools is that CC would fail the build if certain criteria are not met, e.g.:
A unit test fails
Code coverage falls below a certain threshold
FXCop detects a violation of a certain severity
Because the build would fail, and in continuous integration a failed build should be fixed immediately, you wouldn't really need to put those issues into a bug tracking system. Think of build-failing errors as being as severe as the code not compiling: you drop everything and fix them right away.
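To illustrate the first criterion, here is a minimal sketch of a CruiseControl.NET project block that fails the build whenever NUnit reports a failing test (the project name, URL, and paths are placeholders):

    <project name="MyApp">
      <sourcecontrol type="svn">
        <trunkUrl>http://svn.example.com/myapp/trunk</trunkUrl>
        <workingDirectory>C:\builds\myapp</workingDirectory>
      </sourcecontrol>
      <tasks>
        <!-- The nunit task fails the build if any test fails,
             which is the "drop everything and fix it" signal -->
        <nunit>
          <path>C:\Program Files\NUnit\bin\nunit-console.exe</path>
          <assemblies>
            <assembly>C:\builds\myapp\MyApp.Tests.dll</assembly>
          </assemblies>
        </nunit>
      </tasks>
      <publishers>
        <!-- Writes the build log (including merged NUnit results) for the dashboard -->
        <xmllogger />
      </publishers>
    </project>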
