I need some advice about an efficient way of writing integration tests for our current ASP.NET MVC application. Our architecture consists of:
A Service Layer below the Controllers
The Service Layer uses Repositories and, sometimes, a Message Queue to send messages to external applications.
What I think should be done is to:
Write behavioral unit tests for all pieces in isolation. So, for example, we should unit test the Service Layer while mocking Repositories and Message Queues.
Same goes for Controllers. So we mock the Service Layer and unit test the Controller.
Finally, we write separate integration tests for Repositories against a real database and for Message Queue classes against real message queues to ensure they persist/retrieve data successfully.
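The first point has the same shape regardless of stack; here is a hand-rolled sketch in Ruby for brevity (all class and method names are invented; in C# you would express the same seams with interfaces and a mocking library such as Moq or Rhino Mocks):

```ruby
# A service that depends on a repository and a message queue, both
# injected so they can be replaced with fakes in a unit test.
class OrderService
  def initialize(repository, queue)
    @repository = repository
    @queue = queue
  end

  def place_order(order)
    @repository.save(order)
    @queue.publish("order_placed", order)
  end
end

# Hand-rolled test doubles that simply record what they were asked to do.
class FakeRepository
  attr_reader :saved
  def initialize; @saved = []; end
  def save(order); @saved << order; end
end

class FakeQueue
  attr_reader :messages
  def initialize; @messages = []; end
  def publish(topic, payload); @messages << [topic, payload]; end
end

# The behavioral unit test: exercise the service, then assert on what
# reached the (fake) repository and queue.
repo    = FakeRepository.new
queue   = FakeQueue.new
service = OrderService.new(repo, queue)
service.place_order({ id: 1 })

puts repo.saved.length        # 1
puts queue.messages.first[0]  # "order_placed"
```

The point is that the service's collaborators are constructor-injected, so the unit test never touches a database or a broker; the separate integration tests in the third point are what exercise the real ones.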
My Questions:
Are there any other types of integration tests that we need to write?
Do we still need to write integration tests for Services with the real Repositories and Message Queues?
Do we still need to write integration tests for Controllers with the real Services (which in turn consist of real Repositories and Message Queues)?
Any advice would be greatly appreciated.
Cheers
Here at the office we do not test against real services.
We have tests on the service side.
We test controllers with unit tests, and we use mocks in those tests.
But we still don't have any integration tests :-(
We were advised not to use real services for testing; we use Rhino Mocks to simulate the answers of methods called inside controller actions.
So the problem remains: how to do integration tests in a good way.
Maybe this could help you:
http://www.codeproject.com/Articles/98373/Integration-testing-an-ASP-NET-MVC-application-wit.aspx
but I am still looking for a better understanding about its possibilities.
I'm using Ivonna for testing various levels of isolation. I'm able to test that a particular Url with some particular POST data hits the expected Action method with the expected arguments, and at the same time I'm able to stub/mock external services.
I've been using SpecsFor.MVC for integration testing. Essentially you write code in a test class and the framework runs a browser, interpreting your C# into browser actions. It's beautifully simple to set up and use.
We've got a suite of UI tests for our app written using KIF which I'd like to convert to use the new Xcode UI test framework.
The app is a client of a REST API whose responses we're currently mocking by using NSURLProtocol to serve predefined JSON files in response to the GETs, POSTs, PUTs, etc. All the tests are defined using the data in these files, so I want to continue using them. The same endpoints on the server return different data at different points in the tests, so I can't mock them up front; I need to be able to call a method while the test is running to mock the server's next response.
Unfortunately, using NSURLProtocol inside an Xcode UI test doesn't affect the tested app, and I've only seen ways of sending data to the app via launch arguments or environment, such as in this answer. I need to mock them differently at different stages during my tests. How can I mock network requests from inside the UI test in a way that changes during the test? Or how can I communicate with the app being tested so I can get it to mock the requests itself?
We've developed an embedded HTTP server in pure Swift. You can run the HTTP server with a simple mock response handler in your test case, then pass the URL to the target app via an environment variable like API_BASE_URL. I wrote an article about how to do it:
Embedded web server for iOS UI testing
Basically there are two libraries: Embassy, a simple async HTTP server, and Ambassador, a simple web framework for mocking API endpoints.
We have been facing the exact same problem trying to migrate from KIF to UI Tests. To overcome the limitations of UI Tests with regard to stubbing and mocking, we built a custom link between the app and the test code using a web server instantiated in the app. The test code sends HTTP requests to the app that get conveniently translated into a stubbing request (OHHTTPStubs), an NSUserDefaults update, or an upload/download of an item to/from the app. It's even possible to start monitoring network calls when you need to check that specific endpoints get called. It should be fairly simple to add new functionality should you feel there's something missing.
Using the library is extremely simple; check it out on our GitHub repo.
You could either mock them using OHHTTPStubs, or by writing your own stubs.
In a nutshell, you should stub requests with an NSURLSession subclass and inject the stubbed NSURLSession into your networking layer.
The launchEnvironment property might be useful to pass mocked URLs to the app under test.
I am developing in a microservices architecture; currently each service is developed in Ruby.
One of the advantages of decoupling services is the future ability to rewrite a service from Ruby to another technology, let's say Node.js.
When I do this rewrite some time in the future, I would want my integration tests to keep functioning.
Ideally, I would want to develop the integration tests in RSpec (Ruby) and keep them functioning against a non-Rails server via HTTP.
Is that possible with rspec?
Which tool can provide this requirement?
RSpec is a behavior-driven development (BDD) framework for the Ruby programming language. It is used for unit testing and can't be integrated with another tech stack.
However, if it is all about doing integration testing for your web services, I think Cucumber is something that will help you achieve that.
Cucumber as a tool is technology agnostic. However, you will have to design your integration tests in such a way that any helper libraries used to execute or write the test cases can easily be replaced with libraries of another language when you change the tech stack for the web services/test cases.
As long as you test the endpoints exposed by the web service, and not how they are implemented, the tests should keep functioning.
You could achieve that by writing your integration tests in Spock, a beautiful BDD framework for Groovy. Using it you can hit the endpoints like:
GET /cars ---> Check for the list of cars
POST /cars ---> Add a car
GET /cars/1 ---> Get a specific car details
PUT /cars/1 ---> Edit a specific car
Now when you are testing like this, it doesn't matter what the implementation is, because you are always testing the interface.
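The same principle keeps the tests in Ruby, as the asker wanted, because the test code only speaks HTTP. Here is a stdlib-only sketch of the idea; the in-process stub server, endpoint, and data are invented for illustration, and in a real suite the request would go to the deployed service and the assertions would live in RSpec `expect` calls:

```ruby
require "socket"
require "net/http"
require "json"

# Stand-in for the real service: a one-shot HTTP stub on a free port.
server = TCPServer.new("127.0.0.1", 0)
port = server.addr[1]

Thread.new do
  client = server.accept
  # Consume the request line and headers up to the blank line.
  loop { line = client.gets; break if line.nil? || line == "\r\n" }
  body = JSON.generate([{ "id" => 1, "make" => "Toyota" }])
  client.write "HTTP/1.1 200 OK\r\nContent-Type: application/json\r\n" \
               "Content-Length: #{body.bytesize}\r\nConnection: close\r\n\r\n#{body}"
  client.close
end

# The test never touches the implementation, only the HTTP interface,
# so it keeps working if the service is rewritten in Node.js.
response = Net::HTTP.get_response(URI("http://127.0.0.1:#{port}/cars"))
cars = JSON.parse(response.body)
puts response.code       # "200"
puts cars.first["make"]  # "Toyota"
```

Because nothing here depends on the service being written in Ruby, rewriting the service only requires keeping the endpoints and payloads stable.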
I was reading through the integration testing documentation for Grails and I noticed this line:
Grails does not invoke interceptors or servlet filters when calling actions during integration testing.
source: http://grails.org/doc/latest/guide/testing.html#integrationTesting
Why is this? It would make my testing life a lot easier if Grails did invoke the filters. My project makes heavy use of filters and many of my controllers depend on my filters in order to do anything.
I was thinking about it and it seems like one could use groovy black magic to automatically execute the filters in an integration test. Has anyone already done this, or is this something that I'd have to write?
The environment used for integration tests is similar to what's available during run-app; Spring is active, plugins are loaded, a database is available, etc. Pretty much everything except for a web server. Without a server, there are no real requests, no servlet filters, and no Grails filters (which are wrappers for Spring controller HandlerAdaptors). When testing controllers you can access a request and response thanks to the Spring servlet API mock classes. But none of the real web request lifecycle is active, it's all just simulated.
You're right that it should be doable with some custom code. When you do this, please consider making it a plugin so we can all share :)
Let's say I am developing a web application which talks to a RESTful web service for certain things.
The RESTful web service isn't third party, but is being developed in parallel with the main application (a good example would be an e-commerce application and a payment processor, or a social network and an SSO system).
In such a system, acceptance (Cucumber) or functional tests can be done in three ways:
1. By mocking out all external calls using an object-level mocking library, such as Mocha or JMock.
2. By mocking at the HTTP level, using libraries such as webmock.
3. By actually letting the main application make the real call.
The problem with #1 and #2 is that if the API of the underlying application changes, my tests will keep passing while the code actually breaks, defeating the purpose of the tests in the first place.
The problem with #3 is that there is no way I can roll back the data the way the test suite does on teardown. And since I am running my tests in parallel, if I let the actual web service calls go through, I will get errors such as "username taken" and the like.
So the question to community is, what is the best practice?
Put your main application in a development or staging environment. Spin up your web service in the same environment. Have the one call the other. Control the fixture data for both. That's not mocking; you're not getting a fake implementation.
Not only will this allow you to be more confident in real-world stability, it will allow you to test performance in a staging environment, and also allow you to test your main application against a variety of versions of the web service. It's important that your tests don't do the wrong thing when your web service changes, but it's even more important that your main application doesn't either. You really want to know this confidently before either component is upgraded in production.
I can't see that there is a middleground between isolating your client from the service and actually hitting the service. You could have erroneously passing tests because the service has changed behavior, but don't you have some "contract" with the development team working on that service that holds them responsible for breakage?
You might try FakeWeb and fetch a fresh copy of the expected results each day, so your tests won't run against stale responses.
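For option #2, an HTTP-level stub can also record what the application actually sent, which catches at least some client/service drift. A gem-free sketch of the idea (the endpoint and payload are invented; webmock's `stub_request` and request-expectation assertions give you the same thing with far less ceremony):

```ruby
require "socket"
require "net/http"

# HTTP-level stub that records the incoming request before answering,
# so the test can assert on what the app sent, not just on its response.
server = TCPServer.new("127.0.0.1", 0)
port = server.addr[1]
captured = {}

Thread.new do
  client = server.accept
  captured[:request_line] = client.gets.strip  # e.g. "POST /payments HTTP/1.1"
  length = 0
  while (line = client.gets) && line != "\r\n"
    length = line.split(":", 2).last.to_i if line.downcase.start_with?("content-length")
  end
  captured[:body] = client.read(length)
  client.write "HTTP/1.1 201 Created\r\nContent-Length: 0\r\nConnection: close\r\n\r\n"
  client.close
end

# The application code under test would issue this call:
uri = URI("http://127.0.0.1:#{port}/payments")
response = Net::HTTP.post(uri, '{"amount":100}', "Content-Type" => "application/json")

puts response.code            # "201"
puts captured[:request_line]  # "POST /payments HTTP/1.1"
```

This still has the staleness problem described in the question; the recorded expectations are only as good as your knowledge of the real service's current API.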
I'm getting started with TDD and I want to know if it is a bad practice to add a service reference to my test project, or if I should just mock a fake service in the tests that depend on the WCF service.
Yes it is a bad practice to add service references to a unit testing project. You could use the generated service contract interface to mock the real WCF service behavior in the test.
Having a service reference is possibly a bad way to go; you could consider implementing the Gateway pattern, e.g. an IMyFooServiceGateway, as an additional abstraction layer. This way you might be able to make the app more loosely coupled and gain some additional testability: in your test project you'd reference the segregated assembly containing IMyFooServiceGateway and either hand-create a mock that implements IMyFooServiceGateway or use a mocking framework like Rhino Mocks to create one for you.
Rather than using a service reference you could mock out a ChannelFactory using your service contract.
If the project which is the target for the tests has a service reference, you should not have to add an additional service reference to the test project.
When a service reference is added to a project, the code generated for it usually contains a publicly accessible interface to the service. The test project therefore only needs to reference the target project in order to see this interface, which can in turn be dropped into your mocking library of choice or mocked out manually.
It's worth noting, though, that the generated interface doesn't necessarily follow the typical "IFoo" naming convention for interfaces, so it isn't immediately obvious.