Code coverage in system tests - NCover

We've got automated coverage builds, but they only give us numbers for our unit tests. We've also got a bunch of system tests.
This leaves us with two problems: some code looks uncovered even though it's used in the system tests (WCF endpoints, DB access, etc.); and some code looks covered even though it's only used by the unit tests.
How do I set up NCover (running on a build server) to get coverage numbers from that process (a service) while running these system tests? All of the processes are on the same box.
In fact, we have two services talking to each other, and both communicate with an ASP.NET MVC app and an IIS-hosted WCF service; so it's actually multiple processes.
(.NET 4.0, x64. Using NUnit and MSpec. CI server is TeamCity.)

Just to clarify: are the tests and the services under test on the same build server?
If so, I assume the basic issue is how to cover multiple services (sorry if I've oversimplified).
If that's true, unfortunately, NCover 3 can't profile more than one service at a time. However, you can cover each service individually (sequentially, not simultaneously) and then merge the coverage files.
I realize this means running NCover a couple of times in your build script, but from a coverage perspective, that's how it would work.
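As a rough sketch of how that sequential cover-then-merge flow could be scripted (Python here purely for illustration; the service names, install paths, the run_system_tests_for helper, and especially the NCover command-line flags are placeholders to be checked against the NCover 3 documentation for your version):

    import subprocess

    # Hypothetical install paths and service names - adjust for your environment.
    NCOVER = r"C:\Program Files\NCover\NCover.Console.exe"
    REPORTING = r"C:\Program Files\NCover\NCover.Reporting.exe"
    SERVICES = ["FirstService", "SecondService"]

    coverage_files = []
    for service in SERVICES:
        output = f"coverage_{service}.xml"
        # Start covering one service at a time. "//svc" and "//x" stand in
        # for whatever your NCover 3 version uses to attach to a service and
        # to name the output file - check the docs.
        ncover = subprocess.Popen([NCOVER, "//svc", service, "//x", output])
        run_system_tests_for(service)  # hypothetical helper: your test runner
        # Stopping the service lets NCover flush the coverage file and exit.
        subprocess.check_call(["net", "stop", service])
        ncover.wait()
        coverage_files.append(output)

    # Merge the per-service coverage files into a single file; again,
    # "//s" is a placeholder for the actual merge/save flag.
    subprocess.check_call([REPORTING, *coverage_files, "//s", "merged_coverage.xml"])

TeamCity can then pick up merged_coverage.xml as a build artifact or feed it into its coverage report.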
Does this help?

Related

TFS: only SOME tests are discovered in the same assembly

Using the very latest TFS 2017 release 2 (about to upgrade to 3 by the look of it), on-prem. Tests are MSTest. I recently consolidated all our tests into one assembly; the tests are usually distributed to 6 VMs to run in parallel (I have been using only 2 lately for unrelated reasons, a problem I am currently engaged with Microsoft in solving).

The tests not being discovered are not included in the total shown in the log/console during the Run Functional Tests step of the build. It should be ~1400 total tests, but it shows only 896. As far as I can tell, the tests not being properly discovered are the ones that were in the assemblies that got consolidated out of existence.

My method of consolidating was basically to move the code files (.cs) from those assemblies into the single assembly we have now and adjust the namespaces so that all the tests are in the same namespace. No other code changes.
So how can it discover only some of the tests, when all of them have the appropriate attributes (the build definition is set to run only tests with the category "Automated", which all of these tests have) and all of them are in fact in the very same namespace? I am at a loss.

Coded UI in TFS continuous integration

I am new to continuous integration. I have developed a Windows application and its smoke tests using Coded UI. I want to create a build and release process in TFS and execute the Coded UI tests every time a new build is created. I would like to know whether there are any limitations of Coded UI when it comes to continuous integration; I have not been able to find any article so far that explains the behavior of Coded UI in TFS continuous integration.
Your guidance in this regard is highly appreciated.
First off, it's worth noting that Coded UI is effectively deprecated; Selenium is recommended for web apps, and Appium is recommended for desktop apps.
That said, you need to use the Deploy Test Agent task to deploy interactive test agents to your target machines, followed by a combination of the Windows Machine File Copy task (to deploy your tests to the test agents) and the Run Functional Tests task to kick off your tests.

How to functionally test dependent Grails applications

I am currently working on a distributed system consisting of two Grails apps (3.x), let's call them A and B, where A depends on B. For my functional tests I am wondering: How can I automatically start B when I am running the test suite of A? I am thinking about something like JUnit rules, but I could not find any docs on how to programmatically start/manage Grails apps.
As a side note, for nice and clean IDE integration I do not want to launch B as part of my build test phase, but as part of my test suite setup.
A couple of months later, and now deeper into the topic of microservices, I would suggest not treating system tests as candidates for one single project. While I would still keep my unit- and service-level tests (i.e. API testing with mocked collaborators) in the same project as the affected service, I would spin up a system landscape via Gradle and Docker and then run an end-to-end test suite in the form of UI tests.
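To illustrate that setup phase (a minimal sketch in Python rather than Groovy; the compose file name, ports, and health endpoints are all assumptions about your landscape):

    import subprocess
    import time
    import urllib.request

    COMPOSE_FILE = "system-landscape.yml"  # hypothetical compose file defining A and B

    def wait_until_up(url, timeout=60):
        """Poll a health endpoint until the service answers or we give up."""
        deadline = time.time() + timeout
        while time.time() < deadline:
            try:
                urllib.request.urlopen(url)
                return
            except OSError:
                time.sleep(1)
        raise RuntimeError(f"{url} did not come up within {timeout}s")

    def start_landscape():
        # Bring up the whole landscape (A, B, and their dependencies) detached.
        subprocess.check_call(["docker", "compose", "-f", COMPOSE_FILE, "up", "-d"])
        wait_until_up("http://localhost:8080/health")  # app A, assumed port/path
        wait_until_up("http://localhost:8081/health")  # app B, assumed port/path

    def stop_landscape():
        subprocess.check_call(["docker", "compose", "-f", COMPOSE_FILE, "down"])

The end-to-end suite then calls start_landscape() in its setup and stop_landscape() in its teardown, which keeps the landscape out of the build's test phase and inside the test suite itself, as the question asks.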

Robot Framework use cases

Robot Framework is a keyword-based testing framework. I have to test a remote server, so
I need to perform some prerequisite steps:
i) copy the artifact to the remote machine
ii) start the application server on the remote machine
iii) run the tests on the remote server
Before Robot Framework, we did this using an Ant script.
With Robot I can only run the test scripts. But can we do all of these tasks with Robot scripting, and if so, what is the advantage?
Yes, you could do this all with Robot. You can write a keyword in Python that does all of those steps, and then call that keyword in the suite setup of a test suite.
I'm not sure what the advantages would be, though. What you're trying to do are two conceptually different tasks: one is deployment and one is testing, and I don't see any advantage in combining them. One distinct disadvantage is that you then can't run your tests against an already-deployed system. Then again, your keyword could be smart enough to first check whether the application has been deployed, and only deploy it if it hasn't.
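For instance, a minimal keyword library along those lines (everything here - the health URL, the deploy script, the file name - is hypothetical):

    # DeployLibrary.py - a sketch of a Robot Framework keyword library.
    import subprocess
    import urllib.request

    def deploy_if_needed(health_url, deploy_script):
        """Deploy the application only if it is not already running."""
        try:
            # If the health endpoint answers, the app is already deployed.
            urllib.request.urlopen(health_url)
            return
        except OSError:
            pass
        # Not running: invoke the (hypothetical) deployment script.
        subprocess.check_call([deploy_script])

A suite would then import it with "Library    DeployLibrary.py" and call it via "Suite Setup    Deploy If Needed    http://myserver/health    ./deploy.sh".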
One advantage is that you have one less tool in your toolchain, which might reduce the complexity of your system as a whole. It means people can run your tests without first having installed Ant (unless your system also needs to be built with Ant).
If you are asking why you would use Robot Framework instead of writing a script to do the testing, the answer is that the framework provides all the metrics and reports you would otherwise have to script yourself.
Choosing a framework makes your entire QA process easier to manage and saves you the effort of writing code for the parts that are common to every QA process, so you can focus on writing code that tests your product.
Furthermore, since there is an ecosystem around the framework, you can probably find existing code for just about everything you may need, and get answers on how to do something instead of having to change your script.
Yes, you can do this with Robot fairly easily.
The first two can be done easily with SSHLibrary, and the third one depends. Do you mean for the Robot Framework test case to run locally on the other server? That can indeed be done, with configuration files defining which server to run the test case on.
Here are the keywords you can use from Robot Framework's SSHLibrary.

Copy the artifact to the remote machine:
    Open Connection
    Login or Login With Public Key
    Put Directory or Put File

Start the application server on the remote machine:
    Execute Command

Run the tests on the remote machine (assuming the setup is already there):
    Execute Command (use pybot path_to_test_file)

You may experience connection losses, but once the tests are triggered they will keep running on the remote machine.
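Put together, and using SSHLibrary's Python API (the same methods that back those keywords) purely as a sketch - the host, credentials, and paths are placeholders:

    from SSHLibrary import SSHLibrary

    ssh = SSHLibrary()
    ssh.open_connection("remote.example.com")  # placeholder host
    ssh.login("user", "password")              # or ssh.login_with_public_key(...)

    # i) copy the artifact to the remote machine
    ssh.put_file("build/artifact.war", "/opt/app/")

    # ii) start the application server on the remote machine
    ssh.execute_command("/opt/app/start-server.sh")

    # iii) run the tests on the remote machine (Robot must be installed there)
    print(ssh.execute_command("pybot /opt/app/tests/suite.robot"))

    ssh.close_connection()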

Continuous integration server for Erlang code

What kind of agile tools are you using for Erlang development? What continuous integration (CI) server are you using to build Erlang code? The only reference I found was the Quora question "How do I integrate Erlang unit tests in Jenkins (Hudson)?".
I am also interested in the nifty details of setting them up and making them talk to each other.
As a company using Erlang actively, Klarna (www.klarna.com) uses Jenkins (formerly Hudson) for daily regression testing on nearly every dev commit. It's an organization with about 80 people in R&D in total, and we use Jenkins' distributed mode, which allows us to have more than 10 build slaves mastered by a single Jenkins server. Basically, we have a code base of Erlang code version-controlled by tools like SVN or Git. All the test cases use the Common Test framework, and it all works well under Jenkins.
Previously we tried CruiseControl, but gave it up since Jenkins does the job much better.
As Lukas mentioned, you will probably need a tool to generate XML files, since Common Test doesn't export them directly. I haven't really tried that module, though. We did have an implementation of a Common Test event handler to do the job, but it was abandoned due to performance (we have a critical requirement on test time). Right now we use a home-made script to export XML directly from the Common Test logs.
There is a lot more you can do with Erlang and Jenkins, such as code coverage analysis (if you compile properly and export formatted XML to the Cobertura plugin), GUI testing with Selenium, etc.
For setting up Jenkins, I think the Jenkins home page has a good introduction.
Regarding agile tools, I guess it's really hard to define what an agile tool is, and I believe it depends very much on the size of your organization. You will probably need a good process overview tool (at team or department level), a good ticket tracking tool, a code review tool, and a communication tool. There are plenty of them implemented as open source. In our experience, none of them seems to work seamlessly with Jenkins, which means you will need to select and tweak them according to your own requirements. But that's the beauty of open source, isn't it :)?
If you want to do it using Jenkins, I have written a Common Test hook which generates JUnit XML output for your tests, which Jenkins can use to produce test statistics.
https://github.com/garazdawi/cth_tools/blob/master/src/cth_junit.erl
We use Jenkins for our Python code, so I think you can use Jenkins with Erlang code as well.
We use buildbot with our own recipes to hook unit tests.
