I am trying to figure out a way to execute certain integration tests against an in-memory DB (H2) and others against our Oracle test DB. Maybe it's my limited test-writing experience, but it seems that some tests (such as search querying) are better suited to in-memory, since I can control the data set being queried, while others, such as testing transactions/persistence, would benefit from running against our REAL schema and DB (Oracle).
I can think of 2 approaches but do not know how to implement either:
add a new test phase so that I have integration-test-in-mem and integration-test (using Oracle), run different tests in different phases, and configure each phase for a different DB
have each test control which datasource it uses
I would prefer the first, as it's cleaner and I don't have to pollute my tests with logic to control which datasource they use.
Also, the second is not simply a matter of setting different datasources per domain class - I want to reuse the same domain classes in different tests against different DBs.
Any ideas are appreciated, and if you've done this, please share! We do use Spock.
Here is a blog article I've found by Luke Daley on adding custom test phases/types. Has anyone implemented this? Now that I've read it and understand the terminology better, I think what I would like to do is set up new types - not phases. Unfortunately, since we are using Spock, we are already basically using a custom type. We could leave Spock as one of the two types and potentially create a 'SPOCK-IN-MEM' type, although this may require redefining the Spock type, which might not work. Any advice welcome. This seems to come up often enough (I've seen this question asked by others in other forums) that there should be a simpler way to go about it.
One more finding. There is an environment plugin for Spock which adds an annotation to have tests run ONLY in the annotated environment. It reuses Spock's ignored-tests capability and is quite small, simple, and clean. The only downside is that it's Spock-only, which is not an issue for our group.
A simpler way of defining phases would be nice - like a naming convention. It would be nice to be able to define phases/types with just a directory naming convention such as test//. Just create the folders and away you go. Then you could control execution by explicitly setting the phase/type/env in the args when running test-app.
I have a question regarding using the unit of work pattern with the repository pattern to handle transactions across multiple modules.
I have two modules called customer and warehouse and they both have their own databases.
At the moment I use the unit of work to update each module independently of each other. What I want now is to update both modules at the same time and if one of them is invalid then neither are committed.
Is this possible with the unit of work pattern?
I don't think a true unit of work pattern is possible in this scenario. I'd be looking at a service class that uses both repositories.
Edit - okay it might be possible - this guy refers to a solution that looks plausible:
Unit of Work with multiple Data Sources?
I'd still look at just wrapping the existing repositories in service/manager classes and keeping things simple.
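If you go the service-wrapper route, a minimal sketch of the idea might look like the following, assuming .NET repositories and using TransactionScope so the two updates commit or roll back together (all type names here are invented for illustration, not from the original question):

using System.Transactions;

public class Customer { public int Id; public string Name; }
public class StockItem { public int Id; public int Quantity; }

public interface ICustomerRepository { void Update(Customer customer); }
public interface IWarehouseRepository { void Update(StockItem item); }

// Hypothetical service that coordinates both repositories in one transaction.
public class CustomerWarehouseService
{
    private readonly ICustomerRepository _customers;
    private readonly IWarehouseRepository _warehouse;

    public CustomerWarehouseService(ICustomerRepository customers, IWarehouseRepository warehouse)
    {
        _customers = customers;
        _warehouse = warehouse;
    }

    public void UpdateBoth(Customer customer, StockItem item)
    {
        // With two different databases enlisted, TransactionScope typically escalates
        // to a distributed transaction; if either update throws, neither commits.
        using (var scope = new TransactionScope())
        {
            _customers.Update(customer);
            _warehouse.Update(item);
            scope.Complete();
        }
    }
}

Note that the escalation usually requires MSDTC (or an equivalent coordinator) to be available, which is one reason keeping things simple with a service class plus validation is often preferable.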
I have an ASP.NET MVC project, and I thought I could use a tool like MSTest or NUnit to perform regression testing from the controller layer down to the database. However, I hit an issue: tests are not designed to run in order (you can use ordered tests in MSTest, but the tests still run concurrently), and the other problem is how to make the data created by one test accessible to another.
I have looked at Selenium and WatiN, but I just wanted to write something that is not dependent on the UI layer, which is most likely going to change and increase the amount of work needed to maintain the tests.
Any suggestions? Is it just the wrong tool for the job? Should I just use Selenium/WatiN?
Tests should always be independent of each other, so that running order doesn't matter. If your tests depend on other tests you are losing control of what you are testing.
WatiN, and I'm assuming Selenium, won't solve your ordering problem. I use WatiN and NUnit for UI automation and the running order is not guaranteed, which initially posed similar problems to what you're seeing.
In the vein of what dskh answered, you want independent tests, and I've done this in two ways for Integration / Regression black-ish box testing.
First: In your test setup, have any precondition data values set up so you're at a known "good state". For system regression test automation, I've got a number of database scripts that get called to reset data to a known state; this adds some dependencies, so be conscious of the design. Note: in straight unit testing, look at using mock objects to take out dependencies and get your test to be "testing one thing". Mock objects, stubbing method calls, etc. are the way to go if you can, which based on your question sounds likely.
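As a rough illustration of that "known good state" setup, something like the following NUnit fixture works; the connection string and script path are placeholders, and the script is assumed to be plain SQL without GO batch separators:

using System.Data.SqlClient;
using System.IO;
using NUnit.Framework;

[TestFixture]
public class CustomerSearchRegressionTests
{
    // Placeholder connection string for the dedicated test database.
    private const string ConnectionString = "Server=(local);Database=AppTest;Integrated Security=true";

    [SetUp]
    public void ResetToKnownState()
    {
        // Re-run the reset script before every test so each one starts
        // from the same baseline data, independent of running order.
        var script = File.ReadAllText(@"Scripts\reset-test-data.sql");
        using (var connection = new SqlConnection(ConnectionString))
        using (var command = new SqlCommand(script, connection))
        {
            connection.Open();
            command.ExecuteNonQuery();
        }
    }

    [Test]
    public void Search_returns_the_seeded_customer()
    {
        // ... exercise the controller-to-database path against the baseline data
    }
}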
Second: For cases where certain things absolutely had to be set up in a certain way, and scripting them in the test setup would have added a ridiculous amount of necessary system-internal knowledge (e.g. all users set up + all permissions set up + etc.), a small number of "bootstrap" tests were created, in their own namespace to allow easy running via NUnit, to bootstrap the system. Keeping the number of tests small and making sure they were very stable was paramount. Now, on a fresh install, the bootstrap tests are run first and serve as advanced smoke tests; no further tests are run if any of the bootstrap tests fail. It is clunky in some ways, but the alternatives were clunkier or more time/resource/whatever consuming.
Update
The link below (and I assume the project) is dead.
The best option may be using Selenium and the Page Object Model.
See here: http://code.tutsplus.com/articles/maintainable-automated-ui-tests--net-35089
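For reference, a bare-bones page object with Selenium WebDriver might look like this (the URL and element locators are made up for illustration):

using OpenQA.Selenium;

// Tests talk to this class instead of raw locators, so when the UI changes
// only the page object needs updating, not every test.
public class SearchPage
{
    private readonly IWebDriver _driver;

    public SearchPage(IWebDriver driver)
    {
        _driver = driver;
    }

    public void Open()
    {
        _driver.Navigate().GoToUrl("http://localhost/search");
    }

    public void SearchFor(string term)
    {
        _driver.FindElement(By.Id("query")).SendKeys(term);
        _driver.FindElement(By.Id("search-button")).Click();
    }

    public string FirstResultText()
    {
        return _driver.FindElement(By.CssSelector(".result")).Text;
    }
}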
Old Answer
The simplest solution I have found is Rob Conery's Quixote:
https://github.com/robconery/Quixote
It works by firing http requests and consuming responses.
Simple to set up and use and provides integration testing. This tool also allows a series of tests to be executed in order to create test dependencies.
Things started off simple with my fake repositories that contained hard-coded lists of entities.
As I have progressed, my shared fake repositories have become bloated. I am continually adding new properties and new entities to these lists. This is making it extremely difficult to maintain and it is also difficult to see what the test is doing. I believe this is an anti-pattern called "General Fixture".
In researching ASP.NET MVC unit tests, I have seen two methods for preparing repository fixtures that are passed on to the controllers.
Create hard-coded fake repositories that are shared among all tests
Mock parts of the repositories within each test
I'm tempted to explore option #2 above, but I've read that it's not a good idea to mock repositories, and it seems quite daunting in scenarios where I'm testing a controller that operates on collections (e.g. with paging/sorting/filtering capabilities).
My question to the community...
What methods for preparing repository fixtures work well beyond rudimentary examples?
I don't think you should only be choosing one of the two options. There are cases when using a fake repository would be better, and there are cases when mocking would be better. I think you should assess what you need on a case-by-case basis. For example, if you are writing a test for a UsersService that needs to call IUserRepository.DoesUserExist(), which returns a boolean, then you wouldn't use a fake repository; it's easier just to mock the call to return true or false.
Moq is awesome.
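For example, something along these lines; the UsersService and IUserRepository shapes are assumed from the description above, not taken from the question:

using Moq;
using NUnit.Framework;

public interface IUserRepository
{
    bool DoesUserExist(string userName);
}

// Assumed service shape, purely for the sake of the example.
public class UsersService
{
    private readonly IUserRepository _repository;

    public UsersService(IUserRepository repository)
    {
        _repository = repository;
    }

    public bool CanRegister(string userName)
    {
        return !_repository.DoesUserExist(userName);
    }
}

[TestFixture]
public class UsersServiceTests
{
    [Test]
    public void CanRegister_is_false_when_the_user_already_exists()
    {
        var repository = new Mock<IUserRepository>();
        repository.Setup(r => r.DoesUserExist("alice")).Returns(true);

        var service = new UsersService(repository.Object);

        Assert.IsFalse(service.CanRegister("alice"));
    }
}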
For a similar reason, on a new project I'm looking into using an ORM (NHibernate in my case). That way I can point it at an in-memory SQLite instance (rather than SQL Server) and it should be far easier to set up / maintain (I hope). That way I will only need to mock the repository if I have a requirement to test particular scenarios (such as time-outs, etc.).
If you are using your unit tests for TDD, download Rhino Mocks and use option #2.
For the most part, we go with test-specific repository mocks. I've never seen advice not to do this myself, and I find that it works great. For the most part, our repository methods and therefore our mocks only return single models or lists of models (not data contexts), so it is easy to create the data specific to each test and keep it isolated to each query. This means that we can mock whatever data we like without affecting other tests or queries in the same test. It is very easy to see why the data was created and what it is testing.
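As a sketch of what such a test-specific mock can look like for a list-returning query (the controller, repository method, and model here are invented for illustration):

using System.Collections.Generic;
using System.Web.Mvc;
using Moq;
using NUnit.Framework;

public class Product { public int Id; public string Name; }

public interface IProductRepository
{
    IList<Product> GetPage(int pageNumber, int pageSize);
}

// Invented controller, only to show the shape of the test.
public class ProductsController : Controller
{
    private readonly IProductRepository _repository;

    public ProductsController(IProductRepository repository)
    {
        _repository = repository;
    }

    public ActionResult Index(int page = 1, int pageSize = 10)
    {
        return View(_repository.GetPage(page, pageSize));
    }
}

[TestFixture]
public class ProductsControllerTests
{
    [Test]
    public void Index_passes_the_requested_page_to_the_view()
    {
        var data = new List<Product> { new Product { Id = 1, Name = "Widget" } };
        var repository = new Mock<IProductRepository>();
        repository.Setup(r => r.GetPage(1, 10)).Returns(data);

        var controller = new ProductsController(repository.Object);
        var result = (ViewResult)controller.Index();

        Assert.AreSame(data, result.ViewData.Model);
    }
}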
I have been on teams that have also decided to create shared mock data from time to time. I think the decision was generally made because the routines generated dynamic queries, and the data required to mock all of the tests resulted in a good portion of the database being duplicated. However, in retrospect, I probably would have suggested that only the resulting queries needed to be checked, not the contents returned from the database, and thus no data at all would have been mocked, though it would have required some code changes. I only mention this to illustrate that if you can't seem to find a way to make option 2 work, maybe there is a way to refactor the code to make it more testable.
I'm using SpecFlow to do some BDD-style testing. Some of my features are UI tests, so they use WatiN. Some aren't UI tests, so they don't.
At the moment, I have a single StepDefinitions.cs file, covering all of my features. I have a BeforeScenario step that initializes WatiN. This means that all of my tests start up Internet Explorer, whether they need it or not.
Is there any way in SpecFlow to have a particular feature file associated with a particular set of step definitions? Or am I approaching this from the wrong angle?
There is a simple solution to your problem if you use tags.
First, tag your feature file to indicate that a particular scenario needs WatiN, like this:
Feature: Save Proportion Of Sample Pool Required
As a <User>
I want to <Configure size of the Sample required>
so that <I can advise the deployment team of resourcing requirements>.
@WatiN
Scenario: Save valid sample size mid range
Given the user enters 10 as sample size
When the user selects save
Then the value is stored
And then decorate the BeforeScenario binding with an attribute that indicates the tag:
[BeforeScenario("WatiN")]
public void BeforeScenario()
{
...
}
This BeforeScenario method will then only be called for the features that use WatiN.
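If it helps, a slightly fuller sketch of what that hook pair might look like, assuming WatiN's IE browser class (the start URL and class names are illustrative, not from the original answer):

using TechTalk.SpecFlow;
using WatiN.Core;

[Binding]
public class WatiNHooks
{
    // Browser instance shared by the steps of a WatiN-tagged scenario.
    public static IE Browser;

    [BeforeScenario("WatiN")]
    public static void StartBrowser()
    {
        Browser = new IE("http://localhost/app");
    }

    [AfterScenario("WatiN")]
    public static void CloseBrowser()
    {
        Browser.Close();
        Browser = null;
    }
}

Scenarios without the @WatiN tag never touch these hooks, so they run without starting Internet Explorer at all.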
Currently (in SpecFlow 1.3) step-definitions are global and cannot be scoped to particular features.
This is by design to have the same behavior as Cucumber.
I asked the same question on the cucumber group:
http://groups.google.com/group/cukes/browse_thread/thread/20cd7e1db0a4bdaf/fd668f7346984df9#fd668f7346984df9
The bottom line is that the language defined by all the feature files should also be global (one global behavior of the whole application). Therefore, scoping definitions to features should be avoided. Personally, I am not yet fully convinced about this ...
However your problem with starting WatiN only for scenarios that need UI-Integration can be solved in two different ways:
Tags and tagged hooks: You can tag your scenarios (e.g. with @web) and declare that a BeforeScenario hook should only run for scenarios with a certain tag (e.g. [BeforeScenario("web")]). See the Selenium integration in our BookShop example: http://github.com/techtalk/SpecFlow-Examples/blob/master/ASP.NET-MVC/BookShop/BookShop.AcceptanceTests.Selenium/Support/SeleniumSupport.cs
We often completely separate scenarios that are bound to the UI from scenarios that are bound to a programmatic API (e.g. controller, view-model ...) into different projects. We tried to illustrate this in our BookShop example: http://github.com/techtalk/SpecFlow-Examples/tree/master/ASP.NET-MVC/BookShop/ .
Check this out (new feature in SpecFlow 1.4): https://github.com/techtalk/SpecFlow/wiki/Scoped-Bindings
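With scoped bindings, the same tag can restrict the step definitions themselves, not just the hooks. A minimal sketch, assuming the [Scope] attribute described on that wiki page:

using TechTalk.SpecFlow;

[Binding]
[Scope(Tag = "web")] // these step definitions only match scenarios tagged @web
public class WebSearchSteps
{
    [When(@"I search for '(.*)'")]
    public void WhenISearchFor(string term)
    {
        // drive the UI here (WatiN/Selenium); only @web scenarios reach this binding
    }
}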
I originally assumed that a step file was associated with a particular feature file. Once I realized this was not true, it helped me improve all my SpecFlow code and feature files. The language of my feature files is now less context-dependent, which has resulted in more reusable step definitions and less code duplication. Now I organize my step files according to general similarities and not according to which feature they are for. As far as I know there is no way to associate a step with a particular feature, but I am not a SpecFlow expert, so don't take my word for it.
If you still would like to associate your step files with a particular feature file, just give them similar names. There is no need to force a step to work only for that feature, even if the step code only makes sense for that feature. This is because even if you happen to create a duplicate step for a different feature, it will be detected as an ambiguous match. The behavior for ambiguous matches can be specified in an App.config file. See
http://cloud.github.com/downloads/techtalk/SpecFlow/SpecFlow%20Guide.pdf
for more details about the App.config file. By default, ambiguous matches are detected and reported as an error.
[edit]:
Actually, there is a problem with working this way (having step files associated with feature files in your mind only). The problem comes when you add or modify a .feature file using the same wording you have used before and forget to add a step for it, but you don't notice this because you already created a step for that wording once before, written in a context-sensitive manner. Also, I am no longer convinced of the usefulness of not associating step files with feature files. I don't think most clients would be very good at writing the specification in a context-independent manner. That is not how we normally write or talk or think.
The solution for this is to implement tags and scoped bindings, depending on whether the test scenario relates to the web UI or to controller/core logic in code.
Then drill down the scope for each scenario to any of the Before/After execution hooks listed below:
BeforeTestRun
BeforeFeature
BeforeScenario
BeforeScenarioBlock
BeforeStep
AfterStep
AfterScenarioBlock
AfterScenario
AfterFeature
AfterTestRun
Also consider using an implementation-agnostic DSL along with implementation-specific step definitions. For example, use
When I search for 'Barbados'
instead of
When I type 'Barbados' in the search field and push the Search button
By implementing multiple step definition assemblies, the same scenario can execute through different interfaces. We use this approach to test UIs, APIs, etc. using the same scenario.
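A rough sketch of that idea: the same Gherkin step text is bound in two different assemblies, one driving the UI and one calling the API directly, and each test project references only one of them so the match stays unambiguous (the class names and bodies are illustrative):

using TechTalk.SpecFlow;

// In the UI test assembly: the step drives the browser.
[Binding]
public class SearchStepsUi
{
    [When(@"I search for '(.*)'")]
    public void WhenISearchFor(string term)
    {
        // e.g. use Selenium/WatiN to type the term into the search box and click Search
    }
}

// In the API test assembly: the same step text exercises the controller or service layer.
[Binding]
public class SearchStepsApi
{
    [When(@"I search for '(.*)'")]
    public void WhenISearchFor(string term)
    {
        // e.g. call the search endpoint directly and stash the result in ScenarioContext
    }
}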
I encountered a situation where I've renamed a column, but was unaware that my views were still referencing the column by the old name.
This broke my web app, and I pushed these changes to my production server, thus learning the importance of a test suite.
I'm new to testing, so I'm wondering: how can I catch problems caused by this kind of scenario?
Simple: Use the view in one of your tests. After the rename, the test will fail.
After some research I found this article. It explains functional tests, which is what I need to test my views/actions.
The documentation on Rails functional testing seems poor, but the article I linked to above is exactly what I was looking for.
I just didn't understand where and how views are supposed to be tested; it turns out that's in the functional tests.