How to build dependent tests for regression testing - asp.net-mvc

I have an ASP.NET MVC project and I thought I could use a tool like MSTest or NUnit to perform regression testing from the controller layer down to the database. However, I hit an issue: tests are not designed to run in order (you can use ordered tests in MSTest, but the tests still run concurrently). The other problem is how to make the data created by one test accessible to another.
I have looked at Selenium and WatiN, but I wanted to write something that is not dependent on the UI layer, which is likely to change and increase the amount of work needed to maintain the tests.
Any suggestions? Is it just the wrong tool for the job? Should I just use Selenium/WatiN?

Tests should always be independent of each other, so that running order doesn't matter. If your tests depend on other tests you are losing control of what you are testing.

WatiN, and I'm assuming Selenium, won't solve your ordering problem. I use WatiN and NUnit for UI automation and the running order is not guaranteed, which initially posed similar problems to what you're seeing.
In the vein of what dskh answered, you want independent tests, and I've done this in two ways for Integration / Regression black-ish box testing.
First: In your test setup, have any precondition data values set up so you're at a known "good state". For system regression test automation, I've got a number of database scripts that get called to reset data to a known state; this adds some dependencies, so be conscious of the design. Note: in straight unit testing, look at using mock objects to take out dependencies and get your test to be "testing one thing". Mock objects, stubbing method calls, etc. are the way to go if you can, which based on your question sounds likely.
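A minimal sketch of that first approach with NUnit and ADO.NET; the script path, connection string and TestConfig helper are placeholders, not part of the original setup:

    using System.Data.SqlClient;
    using System.IO;
    using NUnit.Framework;

    [TestFixture]
    public class OrderProcessingRegressionTests
    {
        [SetUp]
        public void ResetDatabaseToKnownState()
        {
            // Run the reset script so every test starts from the same known "good state".
            // Assumes a single-batch script (no GO separators).
            var script = File.ReadAllText(@"TestData\reset-known-state.sql");
            using (var connection = new SqlConnection(TestConfig.ConnectionString)) // TestConfig is hypothetical
            using (var command = new SqlCommand(script, connection))
            {
                connection.Open();
                command.ExecuteNonQuery();
            }
        }

        // ...the actual regression tests go here...
    }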
Second: For cases where certain things absolutely had to be set up in a certain way, and scripting them in the test setup would have added a ridiculous amount of system-internal knowledge (e.g. all users set up + all permissions set up + etc.), a small number of "bootstrap" tests were created, in their own namespace to allow easy running via NUnit, to bootstrap the system. Keeping the number of these tests small and making sure they were very stable was paramount. Now, on a fresh install, the bootstrap tests are run first and serve as advanced smoke tests; no further tests are run if any of the bootstrap tests fail. It is clunky in some ways, but the alternatives were clunkier or more time/resource consuming.
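A rough sketch of how those bootstrap tests might be kept apart; the namespace, category and assertions are illustrative. With a current NUnit console runner they can be selected first via something like --where "cat == Bootstrap":

    using NUnit.Framework;

    // A separate namespace (and category) lets the runner target these tests on their own.
    namespace MyApp.Tests.Bootstrap
    {
        [TestFixture, Category("Bootstrap")]
        public class UserProvisioningBootstrapTests
        {
            [Test]
            public void Admin_user_and_default_permissions_exist()
            {
                // Acts as an advanced smoke test on a fresh install;
                // if this fails, skip the rest of the suite.
                var admin = UserDirectory.FindByName("admin"); // UserDirectory is hypothetical
                Assert.That(admin, Is.Not.Null);
                Assert.That(admin.Permissions, Is.Not.Empty);
            }
        }
    }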

Update
The link below (and I assume the project) is dead.
The best option may be to use Selenium and the Page Object Model.
See here: http://code.tutsplus.com/articles/maintainable-automated-ui-tests--net-35089
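As a rough illustration of that approach, a Page Object sketch using Selenium's C# bindings; the page class, element ids and URL are hypothetical:

    using NUnit.Framework;
    using OpenQA.Selenium;
    using OpenQA.Selenium.Chrome;

    // The page object hides locators and navigation details from the tests.
    public class LoginPage
    {
        private readonly IWebDriver _driver;
        public LoginPage(IWebDriver driver) { _driver = driver; }

        public void Open() => _driver.Navigate().GoToUrl("https://example.test/login");

        public void LogIn(string userName, string password)
        {
            _driver.FindElement(By.Id("UserName")).SendKeys(userName);
            _driver.FindElement(By.Id("Password")).SendKeys(password);
            _driver.FindElement(By.Id("LogInButton")).Click();
        }
    }

    [TestFixture]
    public class LoginTests
    {
        private IWebDriver _driver;

        [SetUp]
        public void StartBrowser() { _driver = new ChromeDriver(); }

        [TearDown]
        public void StopBrowser() { _driver.Quit(); }

        [Test]
        public void Valid_credentials_reach_the_dashboard()
        {
            // The test only talks to the page object, so UI changes are absorbed in one place.
            var page = new LoginPage(_driver);
            page.Open();
            page.LogIn("user1", "secret");
            StringAssert.Contains("Dashboard", _driver.Title);
        }
    }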
Old Answer
The simplest solution I have found is Rob Conery's Quixote:
https://github.com/robconery/Quixote
It works by firing HTTP requests and consuming the responses.
It is simple to set up and use, and provides integration testing. The tool also allows a series of tests to be executed in order, to create test dependencies.
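Since that project is gone, here is a minimal sketch of the same idea using plain HttpClient and NUnit; the URL and expected text are placeholders:

    using System.Net.Http;
    using System.Threading.Tasks;
    using NUnit.Framework;

    [TestFixture]
    public class HomePageSmokeTests
    {
        [Test]
        public async Task Home_page_responds_and_contains_expected_content()
        {
            using (var client = new HttpClient())
            {
                // Fire a request at the running site and consume the response.
                var response = await client.GetAsync("http://localhost:5000/");
                response.EnsureSuccessStatusCode();

                var body = await response.Content.ReadAsStringAsync();
                StringAssert.Contains("Welcome", body);
            }
        }
    }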

Related

Multiple feature file launching with single browser instance in Specflow

When I run my test solution, a single browser is launched, but it runs two feature files simultaneously, which causes the test cases to fail. It takes one step from one feature file and the next step from the other feature file.
Contrary to the comment left on your question, I think I may have enough context to answer you.
You describe feature files that are sharing steps, and concerns about multiple browser instances. This tells me that your various step files might each contain their own browser instance.
What you're likely looking to do instead is to use a SpecFlow context: SpecFlow provides its own ScenarioContext object you can use, or you can create your own context class and inject it.
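A minimal sketch of that pattern, assuming Selenium WebDriver and SpecFlow context injection; the class names and URL are illustrative, not from your project:

    using OpenQA.Selenium;
    using OpenQA.Selenium.Chrome;
    using TechTalk.SpecFlow;

    // One shared context object per scenario; SpecFlow injects the same
    // instance into every binding class that asks for it.
    public class WebDriverContext
    {
        public IWebDriver Driver { get; set; }
    }

    [Binding]
    public class WebDriverHooks
    {
        private readonly WebDriverContext _context;
        public WebDriverHooks(WebDriverContext context) { _context = context; }

        [BeforeScenario]
        public void StartBrowser() { _context.Driver = new ChromeDriver(); }

        [AfterScenario]
        public void StopBrowser() { _context.Driver?.Quit(); }
    }

    [Binding]
    public class LoginSteps
    {
        private readonly WebDriverContext _context;
        public LoginSteps(WebDriverContext context) { _context = context; }

        [Given(@"I am on the login page")]
        public void GivenIAmOnTheLoginPage()
        {
            _context.Driver.Navigate().GoToUrl("https://example.test/login");
        }
    }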
Some links that might help:
SpecFlow docs on sharing data between bindings, which explains about ScenarioContext and FeatureContext:
https://docs.specflow.org/projects/specflow/en/latest/Bindings/Sharing-Data-between-Bindings.html
Here's an article on using SpecFlow with Selenium and the Page Object Model: https://docs.specflow.org/projects/specflow/en/latest/ui-automation/Selenium-with-Page-Object-Pattern.html
The SpecFlow YouTube channel will likely be helpful as it's full of experts walking through these sort of examples: https://www.youtube.com/c/SpecFlowBDD/videos
Here's the first video in a 5 part series on how to automate a web application with Selenium and SpecFlow: https://www.youtube.com/watch?v=y1dAogvWVh8
Based on your question, it's also possible that your issue could be that you want things to run in parallel, but have the problem of your tests being dependent on one another or running in a certain order. This will be a bit more complex to solve.
I strongly suggest you structure your tests so that they can run in isolation. You may need to add separate data to the database, or arrange your tests so that they're not touching the same thing. This takes more work, but it is more than worth it, because it enables better maintainability and reliability of your tests and also ensures they can run in parallel successfully.
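One simple way to keep tests from touching the same rows is to build unique data per test, sketched below with NUnit; the repository and entity names are made up:

    using System;
    using NUnit.Framework;

    [Test]
    public void Creating_an_order_persists_it()
    {
        // Unique data per run, so parallel or re-ordered tests never collide.
        var customerName = "customer-" + Guid.NewGuid().ToString("N");

        var order = _orderRepository.Create(customerName, total: 42m); // _orderRepository is a hypothetical fixture field
        var reloaded = _orderRepository.GetById(order.Id);

        Assert.That(reloaded.CustomerName, Is.EqualTo(customerName));
    }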
I hope this helps!

Grails integration test - how to use different datasources for different test

I am trying to figure out a way to execute certain integration tests against an in-memory DB (H2) and others against our Oracle test DB. Maybe it's my limited test-writing experience, but it seems that some tests (such as search querying) are better suited to in-memory, as I can control the data set being queried, while others, such as testing transactions/persistence, would benefit from going against our REAL schema and DB (Oracle).
I can think of 2 approaches but do not know how to implement either:
add a new test phase so that I can have integration-test-in-mem and integration-test (using Oracle), have different tests run in different phases, and configure each phase for a different DB
have each test control which datasource is used
I would prefer the first as it's cleaner and I don't have to pollute my tests with logic to control which datasource they use.
Also, the second is not simply setting different datasources by domain - I want to reuse the same domain in different tests against different DBs.
Any ideas appreciated and if you've done this please share! We do use SPOCK.
Here is a blog article I've found by Luke Daley on adding custom test phases/types. Has anyone implemented this? Now that I've read it and understand the terminology better, I think what I would like to do is set up new types, not phases. Unfortunately, since we are using Spock, we are already basically using a custom type. We could leave Spock as one of the two types and potentially create a 'SPOCK-IN-MEM' type, although this may require redefining the Spock type, which might not work. Any advice welcome. This seems to come up often enough (I've seen the question asked by others in other forums) that there should be a simpler way to go about it.
One more finding: there is an environment plugin for Spock which adds an annotation to have tests run only for the annotated environment. It reuses Spock's ignored-tests capability and is quite small, simple, and clean. The only downside is that it's Spock-only, which is not an issue for our group.
A simpler way of defining phases would be nice - like a naming convention. It would be nice to be able to define phases/types with just a directory naming convention such as test//. Just create the folders and away you go. Then you could control execution by just explicitly setting phase/type/env in args when running test-app.

Why should we use coded ui when we have Specflow?

We have utilized SpecFlow and WatiN for acceptance tests on my current project. The customer wants us to use Microsoft Coded UI instead. I have never used Coded UI, but from what I've seen so far it looks cumbersome. I want to specify my acceptance tests up front, before I have a UI, not as the result of some record/playback process. Anyway, can someone please tell me why we should throw away the SpecFlow/WatiN combo and replace it with Coded UI?
I've also read that you can combine SpecFlow with Coded UI, but it looks like a lot of overhead for something I am already doing fine in SpecFlow.
I wrote a blog post on how to do this that you might find useful:
http://rburnham.wordpress.com/2011/03/15/bdd-ui-automation-with-specflow-and-coded-ui-tests/
The pros and cons of Coded UI Tests, as I see them: you are testing the application exactly how the user will be using it. This is good for acceptance testing, and it is also really good for end-to-end testing, but it has its limitations. UI tests have been known to be fragile; for example, when MS created the VS2010 UI, almost all of their UI tests broke, the main reason being the technology change. Coded UI tests help limit this through the way they match a control: they use a probability-based match, trying to find the best match based on the information they have, such as the control name.

For us, Coded UI tests were the choice because of technology limitations. Our legacy app is VB, and although CUIT does not work great with it (I'm in the process of writing an extension to get better control information), it was still our only choice. Also keep in mind that CUIT is new and has its own limitations. You should be prepared to be very structured in the way you lay out your project, as maintaining your UIMaps can be a bit of manual work due to the current end-to-end behaviour in VS2010; for example, creating a CUIT from an existing action recording always places the test in a UIMap called UIMap.uitest, and there is no way to change that or transfer it to another UIMap. If you use multiple UIMaps, this means you will need to record your steps first and then use them in your test. However, being in .NET, it is still very flexible.
By far the best thing about SpecFlow is its Gherkin syntax, which gives you readability and living documentation. Normally you are testing features or behaviours of your app, which is where the value comes from. It generally aims the test just below the UI, so there is a little less chance of the test breaking when the UI changes here and there. To me, SpecFlow is great when your application is under constant change and you want to ensure existing features keep working. It also fits well in a Scrum environment, where you can write your scenarios as a description of how things should work. One limitation of SpecFlow I can see is that it is open to interpretation; because of this, it can be easy to write a test that is not very reusable and is hard to maintain. I like to use more generic terms to describe my steps, like "Log in as User1" instead of "Go to the login page, enter username and password, click log in". Describing steps in a more granular way makes them harder to reuse and tightly couples them to the UI. How the login actually works should be up to the code behind, not the SpecFlow feature.
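To make that concrete, here is a sketch of one coarse-grained SpecFlow step; the LoginPage page object and TestUsers helper are hypothetical:

    using TechTalk.SpecFlow;

    [Binding]
    public class AuthenticationSteps
    {
        private readonly LoginPage _loginPage; // hypothetical page object injected by SpecFlow

        public AuthenticationSteps(LoginPage loginPage)
        {
            _loginPage = loginPage;
        }

        // One broad step keeps the feature file readable and stops it from
        // breaking every time the login screen changes.
        [Given(@"I am logged in as (.*)")]
        public void GivenIAmLoggedInAs(string userName)
        {
            _loginPage.Open();
            _loginPage.LogIn(userName, TestUsers.PasswordFor(userName)); // TestUsers is illustrative
        }
    }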
Combining the two, however, seems to us more beneficial than just using Coded UI Tests. If we decide to completely change the UI, we would at least have the expected behaviours stored in our SpecFlow features in a way anyone can understand. In the end you need to consider how the application will evolve and what type of application it is.

Integration tests implementation

I have a three-tiered app:
web app (asp.net mvc for simplicity here),
business services
data repositories
And I know there are four types of integration tests:
top down
bottom up
sandwich (combination of the top two)
big bang
I know I would write big-bang tests just like unit tests, but without any mocking, so I would employ a backend DB as well...
Questions
I don't know how to write the other types of integration tests.
How do I write non-big-bang integration tests?
Should integration tests be equal to unit tests meaning same number of tests, but testing without mocks? Or should these tests test something completely different?
Could anybody provide any information on how to do this, or on whether it's actually feasible?
I suggest doing these:
unit tests / must not hit any external resource
focused integration tests (I guess that'd be bottom-up). Have code that sits very close to the external resource, whose sole responsibility is integrating with it. Don't try to unit test those classes; instead write very focused tests that hit the real resource and don't have to deal with the rest of the logic in your system. Keep these integration classes as thin as possible (see the sketch below).
full system tests (I guess big bang). I mean with the UI and everything (or the API, if that's your endpoint). Make sure you cover as much as possible with the previous tests, and treat these more as simple checks that the underlying pieces are hooked up appropriately.
Depending on your system, you may or may not want to complement 3 with integration tests at the top level of the code, but without involving the UI. Regardless of which option you take, make sure to have more comprehensive coverage through unit and focused integration tests, as testing various behaviours at the top level has a complexity that can get out of control very quickly.
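As a sketch of point 2, a focused integration test that exercises only a thin repository class against a real test database; all of the names below are illustrative:

    using NUnit.Framework;

    [TestFixture]
    public class CustomerRepositoryIntegrationTests
    {
        private CustomerRepository _repository; // the thin class whose only job is talking to the DB

        [SetUp]
        public void SetUp()
        {
            // Point at a dedicated test database and reset it to a known state.
            TestDatabase.Reset(); // hypothetical helper
            _repository = new CustomerRepository(TestDatabase.ConnectionString);
        }

        [Test]
        public void Save_then_load_round_trips_a_customer()
        {
            var saved = _repository.Save(new Customer { Name = "Integration Test Customer" });

            var loaded = _repository.GetById(saved.Id);

            Assert.That(loaded.Name, Is.EqualTo("Integration Test Customer"));
        }
    }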
Should integration tests be equal to unit tests meaning same number of tests, but testing without mocks? Or should these tests test something completely different?
As I mentioned in 1 and 2, it's best when those test different things. This depends on the system, but I'd usually expect the number of unit tests to be a few times the number of integration tests. For full system tests, make sure you have enough so that you can tell all the pieces were hooked up correctly, but not so many that it becomes too complex to test each scenario.

BDD with Cucumber and rspec - when is this redundant?

A Rails/tool specific version of: How deep are your unit tests?
Right now, I currently write:
Cucumber features (integration tests) - these test against the HTML/JS that is returned by our app, but sometimes also test other things, like calls to third-party services.
RSpec controller tests (functional tests), originally only if the controllers have any meaningful logic, but now more and more.
RSpec model tests (unit tests)
Sometimes this is entirely necessary: it is necessary to test behaviour in the model that is not entirely obvious or visible to the end user. When models are complex, they should definitely be tested. But other times the tests seem redundant. For instance, do you test method foo if it is only called by bar, and bar is tested? What if bar is a simple helper method on a model that is used by, and easily testable in, a Cucumber feature? Do you test the method in RSpec as well as Cucumber? I find myself struggling with this, as writing more tests takes time, and maintaining multiple "versions" of what is effectively the same behaviour makes maintaining the test suite more time-intensive, which in turn makes changes more expensive.
In short, do you believe there is there a time when writing only Cucumber features is enough? Or should you always test at every level? If you think there is a grey area, what is your threshold for "this needs a functional/unit test." In practical terms, what do you do currently, and why (or why not) do you think it's sufficient?
EDIT: Here's an example of what might be "test overkill." Admittedly, I was able to write this pretty quickly, but it was completely hypothetical.
Good question, and one I've grappled with recently while working on a Rails app that also uses Cucumber/RSpec. I try to test as much as possible at every level; however, I've also found that as the codebase grows, I sometimes feel I'm repeating myself needlessly.
Using "Outside-in" testing, my process usually goes something like: Cucumber Scenario -> Controller Spec -> Model Spec. More and more I find myself skipping over the controller specs as the cucumber scenarios cover much of their functionality. I usually go back and add the controller specs, but it can feel like a bit of a chore.
One step I take regularly is to run rcov on my cucumber features with rake cucumber:rcov and look for notable gaps in coverage. These are areas of the code I make sure to focus on so they have decent coverage, be it unit or integration tests.
I believe models/libs should be unit tested extensively, right off the bat, as it is the core business logic. It needs to work in isolation, outside of the normal web request/response process. For example, if I'm interacting with my app through the Rails console, I'm working directly with the business logic and I want the reassurance that methods I call on my models/classes are well tested.
At the end of the day, every app is different and I think it's down to the developer(s) to determine how much test coverage should be devoted to different parts of the codebase and find the right balance so that your test suite doesn't bog you down as your app grows.
Here's an interesting article I dug up from my bookmarks that is worth reading:
http://webmozarts.com/2010/03/15/why-not-to-want-100-code-coverage/
Rails itself has a well-tested codebase, so I'd avoid re-testing behaviour that is already covered there.
For example, unless it is custom code, it is pointless to test the results of validations at unit and functional levels. I'd test them at the integration level though. Cucumber features act as specifications for your project, so it is good to specify that you need a validation for x and y, even if the implementation is a single line declaration in the model.
You usually don't want to have both Cucumber stories and RSpec controller specs/integration tests. Pick one (generally Cucumber is the better choice, except for certain special cases). Then use RSpec for your models, and that's all you need.
I test complex model/lib methods with RSpec, then the main business logic in the web layer with Cucumber, so I'm sure that the main features of the site will work; then, if I have more time and resources, I test everything else.
It's easier to write simple specs for simple methods; it's much easier than writing cukes.
If you keep your methods simple, and keep your specs simple by testing only the logic inside that method, you will find joy in unit testing.
If anything is redundant, it's the Cucumber tests. If you have well-tested models and libs, your software should work.
The purpose of Cucumber is not to run integration tests. Cucumber, and BDD in general, works as a communication platform: a way to improve the conversation within a big team of developers and non-developers who are building large, complex software. BDD is very useful for communicating a model and its domain at the same level to everybody on the team, even those who don't know anything about computers.
If that is not your scenario, don't use Cucumber, because you don't need it. Use RSpec and Capybara to cover your JS code and your integration tests.
