TFS: only SOME tests are discovered in the same assembly

Using the very latest TFS 2017 Update 2 (about to upgrade to Update 3 by the look of it), on-prem. The tests are MSTest. I recently consolidated all our tests into one assembly; the tests are usually distributed to 6 VMs to run in parallel (though lately I have been using only 2, for unrelated reasons, a problem I am currently engaged with MS in solving). The tests not being discovered are not included in the total that shows in the log/console during the Run Functional Tests step of the build: it should be ~1400 tests in total, but it shows only 896. As far as I can tell, the tests not being discovered are the ones that lived in the assemblies that got consolidated out of existence. My method of consolidating was basically to move the code files (.cs) from those assemblies into the single assembly we have now and adjust the namespaces so that all the tests are in the same namespace. No other code changes.
So how can it find only some of the tests, when all of them have the appropriate attributes (the build definition is set to run only tests with a category of "Automated", which all of these tests have) and all of them are in fact in the very same namespace? I am at a loss.
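For context, discovery and the category filter both key off the standard MSTest attributes, so each consolidated test should look roughly like this (a minimal illustration; the namespace, class, and test names here are made up):

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

namespace MyCompany.FunctionalTests // hypothetical consolidated namespace
{
    [TestClass] // the class must be public and marked [TestClass] to be discovered
    public class CheckoutTests
    {
        [TestMethod]
        [TestCategory("Automated")] // the build definition's category filter matches this
        public void PlacingAnOrderSucceeds()
        {
            Assert.IsTrue(true); // placeholder body
        }
    }
}
```

One thing worth double-checking after this kind of consolidation is that every moved class is still public and still carries [TestClass]: a class that compiles as internal will silently drop out of discovery, which would produce exactly this kind of shortfall.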

Related

Only testing incremental changes in vNext with TFS 2017

I'm currently facing the issue that I am not able to test "only the things that have changed" in a changeset using vNext with TFS 2017.
When using the "Run Functional Tests" step, I can only choose a test assembly, but it will always test the changes against the solution I've picked and run all tests within the test assembly.
I've tried to split the test assemblies into more test categories, but running all 2000 tests against a change to one file seems a bit too much.
Is there a way to only run tests against source code that has been changed?
We want to decrease test time.
You can set multiple test assemblies separated by semicolons; for example, you can specify **\commontests\*test*.dll; **\frontendtests\*test*.dll as the Test Assembly for the Run Functional Tests task.
If the changes come from your feature project, you should test all assemblies. If the changes come only from part of your test projects, you can test just the changed test assemblies. You can achieve this with two build definitions: the first contains a PowerShell task, and the second is your current build.
The PowerShell task in the first build detects what changed and queues the second build (see the sketch after this list):
If the feature project changed, specify **\*test*.dll as the test assembly for your second build, and then queue it.
If part of the test projects changed, specify **\*project1test*.dll; **\*project2test*.dll etc. as the test assembly for your second build, and then queue it.
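To make the "queue the second build" part concrete, here is a minimal sketch of the call the first build's script step would make against the TFS 2017 REST API, written in C# to match the rest of this page. Everything specific in it is an assumption: the collection URL, the definition ID, and especially the TestAssemblies variable name, which must match a variable your second build definition actually passes to its Run Functional Tests step.

```csharp
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class QueueSecondBuild
{
    static async Task Main()
    {
        // Hypothetical values -- substitute your own collection/project URL and
        // the numeric ID of the second (test) build definition.
        const string projectUrl = "http://tfs:8080/tfs/DefaultCollection/MyProject";
        const int definitionId = 42;

        // The assembly pattern chosen by the change-detection logic (omitted here).
        // In C# source, "\\\\" is two literal backslashes, which JSON decodes to one.
        string parameters =
            "{\"TestAssemblies\":\"**\\\\*project1test*.dll; **\\\\*project2test*.dll\"}";

        // "parameters" must be embedded in the request body as a JSON *string*,
        // so escape it once more: backslashes first, then quotes.
        string escaped = parameters.Replace("\\", "\\\\").Replace("\"", "\\\"");
        string body = "{\"definition\":{\"id\":" + definitionId + "},\"parameters\":\"" + escaped + "\"}";

        var handler = new HttpClientHandler { UseDefaultCredentials = true }; // NTLM as the build account
        using (var client = new HttpClient(handler))
        {
            HttpResponseMessage response = await client.PostAsync(
                projectUrl + "/_apis/build/builds?api-version=2.0",
                new StringContent(body, Encoding.UTF8, "application/json"));
            Console.WriteLine("Queue request returned " + (int)response.StatusCode);
        }
    }
}
```

The change-detection half (asking TFS which files a changeset touched) can be done with the same REST API under _apis/tfvc; the split shown here (decide the pattern, then queue with it) is the part the answer describes.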

One build definition won't generate fakes assemblies, another one does

Introduction
I have a problem with Team Foundation Server Express 2013 on my machine. I have two build definitions on the same controller and agent, both of which run on the same server and the same environment as well.
It should be noted that I already looked at the "similar questions" without any luck. This is clearly not related to the same root cause, and the symptoms are slightly different too.
One of them is a gated check-in build definition, which just compiles everything when committing to the development branch.
The other is a scheduled build definition, which runs every Saturday at 3 AM, building any changes that may have been committed to the main branch since the last run.
The gated build definition has a process (with only minor changes so that it skips the tests and just compiles the code) based on the TfvcTemplate.12.xaml template.
The scheduled build definition's process is based on an Azure build definition template that might come from an older version of Visual Studio, or maybe on the TfvcContinuousDeployment.12.xaml template.
The issue
My gated build definition runs just as expected, without issues. It compiles the full solution, and only passes if the compilation succeeds.
The scheduled build definition, however, fails to compile (even before it reaches the point where it runs the unit tests). The error I see is as follows.
Obviously this is due to missing Fakes assemblies. I tried taking the assemblies and checking them in (which I would rather avoid), only to find that this build definition then runs just fine, but not the gated one, which ran just fine before.
I thought about running fakes.exe in the build template to generate them manually before compiling, but in my initial tests (to see whether this theory would even work) it won't even run from the command line, and it outputs some errors and warnings that I don't understand (but which are probably not relevant anyway, since I might be running fakes.exe with improper arguments).
Updates
Update #1
It should be noted that I have Visual Studio 2013 Ultimate installed on my build server as well. Both it and TFS 2013 Express have Update 3 installed, and the server is fully updated.
I ended up abandoning Fakes altogether and implementing Moq instead. It works a lot better, and it forces me to abandon shims and moles, which are often considered bad practice anyway.
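For anyone making the same switch, the shape of the replacement looks roughly like this (a minimal sketch; IOrderRepository and OrderService are made-up stand-ins for whatever the Fakes stubs were covering):

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;
using Moq;

public interface IOrderRepository
{
    int CountFor(string customerId);
}

public class OrderService
{
    private readonly IOrderRepository _repository;
    public OrderService(IOrderRepository repository) { _repository = repository; }
    public bool IsFrequentBuyer(string customerId) => _repository.CountFor(customerId) > 10;
}

[TestClass]
public class OrderServiceTests
{
    [TestMethod]
    public void FrequentBuyerRequiresMoreThanTenOrders()
    {
        // Arrange: a Moq mock takes the place of a generated Fakes stub.
        var repository = new Mock<IOrderRepository>();
        repository.Setup(r => r.CountFor("alice")).Returns(11);

        // Act + Assert
        var service = new OrderService(repository.Object);
        Assert.IsTrue(service.IsFrequentBuyer("alice"));
    }
}
```

Since Moq is an ordinary NuGet package, nothing has to be generated at build time, which removes the very thing that behaved differently between the two build definitions.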

Chutzpah integration with TFS 2012

My team is trying to integrate Chutzpah into the TFS 2012 build process. We used this blog post as our starting point.
At a high level, the practical issue is that the Visual Studio Test Runner in the build agent context simply isn't finding the Chutzpah hooks. So while we can define **\*.js as a test source, without the Chutzpah bootstrapper actually being found and initialized, the test runner doesn't do anything with these files.
At a more detailed level, we are getting three concerning messages when we check the logs for loading the custom assemblies for the build controller:
Summary: There were 0 failures, 2 errors and 1 warnings loading custom activities and services.
Error: Method 'ToXml' in type 'Chutzpah.VS2012.TestAdapter.ChutzpahAdapterSettings' from assembly 'Chutzpah.VS2012.TestAdapter, Version=2.2.0.171, Culture=neutral, PublicKeyToken=1ca802c37ffe1896' does not have an implementation.
Error: API restriction: The assembly '...\AppData\Local\Temp\VSTFSBuild\8c8e9402-1169-4782-99a9-ce42f83be8f0\A1288811191\Chutzpah\Microsoft.VisualStudio.TestPlatform.ObjectModel.dll' has already loaded from a different location. It cannot be loaded from a new location within the same appdomain.
Warning: Could not load file or assembly '...\AppData\Local\Temp\VSTFSBuild\8c8e9402-1169-4782-99a9-ce42f83be8f0\A1288811191\Chutzpah\phantomjs.exe' or one of its dependencies. The module was expected to contain an assembly manifest.
Other than this information, we're more or less stuck. I'd love to hear from someone who has actually got Chutzpah running on a standalone 2012 build server so we can compare configurations.
The error "API restriction: The assembly '...'" indicates that TFS is finding a DLL in two different locations. Check whether you have **\*test*.dll set as the value for any test assemblies, and change it to something narrower, such as **\*test.dll, so that the same DLL is not matched twice. This will prevent it being loaded multiple times.
The other error might be because the test projects are not being rebuilt.
Try re-building the test projects.
I hope this helps.
Actually, Software Carpenter has a point. I believe what may be happening is that your test spec for ordinary unit tests is *test*.dll (or something similar); this means Microsoft.VisualStudio.TestPlatform.ObjectModel.dll is loaded as a unit test DLL, and then TFS tries to load it again when running the Chutzpah tests.
Try disabling the normal unit test spec and see if that helps; if it does, change your spec to something (such as *test.dll) that doesn't include TestPlatform.ObjectModel.dll.
Source: I just had the same error when trying to build a project with Test in its name.
This discussion is also taking place on CodePlex, under "Javascript Unit Tests on Team Foundation Service with Chutzpah". Maybe some help can be provided there.

Code coverage in system tests

We've got automated coverage builds, but they only give us numbers for our unit tests. We've also got a bunch of system tests.
This leaves us with two problems: some code looks uncovered even though it's used in the system tests (WCF endpoints, DB access, etc.); and some code looks covered even though it's only used by the unit tests.
How do I set up NCover (running on a build server) to get coverage numbers for a process (a service) while running these system tests? All of the processes are on the same box.
In fact, we have two services talking to each other, and both communicate with an ASP.NET MVC app and an IIS-hosted WCF service; so it's actually multiple processes.
(.NET 4.0, x64. Using NUnit and MSpec. CI server is TeamCity.)
Just to clarify, are "over there" and "over here" on the same build server?
If so, I assume the basic issue is how to cover multiple services (sorry if I've oversimplified).
If that's true, unfortunately, NCover 3 can't profile more than one service at a time. However, you can cover each service individually (sequentially, not simultaneously) and then merge the coverage files.
I realize this means running NCover a couple of times in your build script, but from a coverage perspective, that's how it would work.
Does this help?
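To make "sequentially, then merge" concrete, the build script could shell out once per process and merge afterwards. This is only a sketch, written in C# for consistency with the rest of this page: the NCover executable names and the //svc, //x, and //s arguments are assumptions based on my reading of NCover 3's console syntax, and the service names are invented; verify all of them against your NCover documentation before relying on this.

```csharp
using System;
using System.Diagnostics;

class CoverageBuildStep
{
    // Helper: run an external command and fail the build step on a non-zero exit code.
    static void Run(string exe, string args)
    {
        var info = new ProcessStartInfo(exe, args) { UseShellExecute = false };
        using (var process = Process.Start(info))
        {
            process.WaitForExit();
            if (process.ExitCode != 0)
                throw new InvalidOperationException(exe + " exited with " + process.ExitCode);
        }
    }

    static void Main()
    {
        // Pass 1: profile the first service while the system tests run against it.
        // "//svc" (profile a Windows service) and "//x" (output file) are assumed
        // NCover 3 console options -- check them against your version's docs.
        Run("NCover.Console.exe", "//svc MyServiceA //x coverage.serviceA.xml");

        // Pass 2: the same tests again, this time profiling the second service
        // (NCover 3 can't profile more than one service at a time).
        Run("NCover.Console.exe", "//svc MyServiceB //x coverage.serviceB.xml");

        // Finally, merge the per-process coverage files into one set of numbers.
        Run("NCover.Reporting.exe",
            "coverage.serviceA.xml coverage.serviceB.xml //s coverage.merged.xml");
    }
}
```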

How to setup ASP.Net MVC solution for quickest build time

I want to find the best setup for ASP.Net MVC projects to get the quickest code-build-run process in Visual Studio.
How can you set up your solution to achieve near zero second build times for small incremental changes?
If you have a test project, with dependencies on other projects in your solution, a build of the test project will still process the other projects even if they have not changed.
I don't think it is entirely rebuilding these projects, but it is certainly processing them. When doing TDD you want a near-zero-second build time for your small incremental changes, not a 20-30 second delay.
Currently my approach is to reference the dll of a dependent project instead of referencing the project itself, but this has the side effect of requiring me to build these projects independently should I need to make a change there, then build my test project.
One small tip: if you use PostSharp, you can define the conditional compilation symbol SKIPPOSTSHARP to avoid rebuilding the aspects in your projects during unit testing. This works best if you create a separate build configuration for unit testing.
I like Onion architecture.
Solution should have ~3 projects only =>
Core
Infrastructure
UI
Put 2 more projects (or 1, and use something like NUnit categories to separate tests) =>
UnitTests
IntegrationTests
It's hard to trim down more. <= 5 projects aren't bad. And yeah - avoid project references.
Unloading unnecessary projects through VS might help too.
And most importantly - make sure your pc is not clogged up. :)
Anyway - that's just another trade-off. In contrast to strongly typed languages, dynamic languages depend more on tests, but tests are faster and easier to write in them.
Small tip - instead of rebuilding the whole solution, rebuild the selection only (Tools => Options => Keyboard => Build.RebuildSelection). Map it to Ctrl+Shift+B, and remap the original binding to Ctrl+Shift+Alt+B.
Here's how you could structure your projects in the solution:
YourApp.BusinessLogic : class library containing controllers and other logic (this could reference other assemblies)
YourApp : ASP.NET MVC web application referencing YourApp.BusinessLogic and containing only views and static resources such as images and javascript
YourApp.BusinessLogic.Tests : unit tests
Then in the configuration properties of the solution you may uncheck the Build action for the unit test project. This will decrease the time between pressing Ctrl+F5 and seeing your application appear in the web browser.
One way you can cut down on build times is to create different build configurations that suit your needs and remove specific projects from being built.
For example, I have Debug, Staging, Production, and Unit Test as my configurations. The Debug build does not build my Web Deployment project or my Unit Test project. That cuts down on the build time.
I don't think "code-build-run" is in any way a tenet of TDD.
You don't need zero-second build times; that is an unreasonable expectation. 5-10 seconds is great.
You're not running tests after every tiny incremental change. Write a group of tests around the interface for a new class (a controller, say, or a library class that will implement business logic). Create a skeleton of the class -- the smallest compilable version of the class. Run tests -- all should fail. Implement the basic logic of the class. Run tests. Work on the failing code until all tests pass. Refactor the class. Run tests.
Here you've built the solution a few times, but you've only waited for the builds a total of 30 seconds. Your ratio of build time to coding time is completely negligible.
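As a made-up illustration of that rhythm, the "smallest compilable version of the class" is just enough for the whole test group to build and fail on the first run:

```csharp
using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;

// The skeleton: every member throws, so the tests compile and all fail at first.
public class DiscountCalculator
{
    public decimal DiscountFor(int orderCount)
    {
        throw new NotImplementedException();
    }
}

[TestClass]
public class DiscountCalculatorTests
{
    [TestMethod]
    public void NewCustomersGetNoDiscount()
    {
        var calculator = new DiscountCalculator();
        Assert.AreEqual(0m, calculator.DiscountFor(0));
    }

    [TestMethod]
    public void TenthOrderEarnsADiscount()
    {
        var calculator = new DiscountCalculator();
        Assert.IsTrue(calculator.DiscountFor(10) > 0m);
    }
}
```

Each red-green iteration after that touches only the test project and the one class under change, which is exactly where the earlier tips about trimming project count and skipping unneeded projects pay off.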
