We call many different external APIs from our system, and I'm looking for a tool I can use to simulate those APIs so we can test ours in the Staging and Development environments.
Our application is written in Ruby on Rails 3.0, but since all the API calls to and from it are made over HTTP, there is no language dependency.
VCR will record the actual response from the web service and then replay that response on subsequent runs.
To simulate it completely, you can use FakeWeb: you record the output to a file and have it served back to your application.
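A rough sketch of what that might look like with FakeWeb (the URL and fixture file are placeholders for whatever you recorded):

require 'fakeweb'
require 'net/http'

# Serve a previously recorded body for this URL instead of hitting the network.
FakeWeb.register_uri(:get, 'http://api.example.com/users/1',
                     :body => File.read('fixtures/user_1_response.json'))

# Any Net::HTTP-based call to that URL now returns the recorded body.
Net::HTTP.get(URI.parse('http://api.example.com/users/1'))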
This is called test mocking/stubbing and is a common practice. Basically, you override the code that makes the API call so it returns canned data without actually performing the HTTP request. Just search for those terms for more details.
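As a minimal sketch of that idea in Minitest (the ApiClient class and fetch_user method are made-up names standing in for your own wrapper):

require 'minitest/autorun'
require 'minitest/mock'

# Hypothetical wrapper around the external API call.
class ApiClient
  def fetch_user(id)
    raise 'would hit the network' # the real implementation would make an HTTP request
  end
end

class ApiClientStubTest < Minitest::Test
  def test_returns_stubbed_data_without_http
    client = ApiClient.new
    # Replace the API call with canned data for the duration of the block.
    client.stub(:fetch_user, { 'name' => 'Alice' }) do
      assert_equal 'Alice', client.fetch_user(42)['name']
    end
  end
end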
Is there a way to mock requests when writing automated UI tests in Swift 2.0? As far as I am aware, the UI tests should be independent of other functionality. Is there a way to mock the response from server requests in order to test the behaviour of the UI depending on the response? For example, if the server is down, the UI tests should still run. A quick example for login: if the mocked password check fails, the UI should show an alert; if the login is successful, the next page should be shown.
In its current implementation, this is not directly possible with UI Testing. The only interface the framework has directly to the code is through its launch arguments/environment.
You can have the app look for a specific key or value in this context and switch up some functionality. For example, if the MOCK_REQUESTS key is set, inject a MockableHTTPClient instead of the real HTTPClient in your networking layer. I wrote about setting the parameters and NSHipster has an article on how to read them.
While not ideal, it is technically possible to accomplish what you are looking for with some legwork.
Here's a tutorial on stubbing network data for UI Testing I put together. It walks you through all of the steps you need to get this up and running.
If you are worried about the idea of mocks making it into a production environment for any reason, you can consider using a 3rd party solution like Charles Proxy.
Using the Map Local tool, you can route calls from a specific endpoint to a local file on your machine. You can paste plain text into that local file containing the response you want it to return. Per your example:
Your login hits the endpoint yoursite.com/login.
In Charles, using the Map Local tool, you route the calls hitting that endpoint to a file saved on your computer, e.g. mappedlocal.txt.
mappedlocal.txt contains the following text:
HTTP/1.1 404 Failed
When Charles is running and you hit this endpoint, your response will come back with a 404 error.
You can also use another option in Charles called "Map Remote" and build an entire mock server which can handle calls and responses as you wish. This may not be exactly what you are looking for, but it's an option that may help others, and it's one I use myself.
Consider a Rails app that hits an API (a Sinatra app) being developed separately from the Rails app. I want to test an API call from within the Rails tests.
The API code:
post '/foo/create' do
  ...
end
I created a mock, but that doesn't make sense because it is just a copy of the API file. That stinks.
It is possible to require the API file in the test, but how do I call it from RSpec? There is no route in the Rails app for it.
One option is to start the API and make the HTTP call from the Rails test, but this is smelly because:
You have to start the API server to run the Rails tests
Why should a Rails test make an HTTP request? Rack::Test simulates this.
I don't think this will work because the apps have different test databases, but share the same production database.
EDIT: The point of the test is that the API call creates records that the Rails app is expecting. So the Rails app needs to test the state of the database after the API call is made.
Well, the perfect answer for you is a gem that mocks the response, like WebMock. It fakes the response when that URL is accessed, so in the test your app makes the request as if it were real, except that before it hits the web, it hits your mock, which responds with the desired answer.
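For instance, a rough sketch with WebMock in RSpec (the API host is a placeholder, and the Net::HTTP call stands in for whatever your Rails code actually does):

require 'webmock/rspec'
require 'net/http'

describe 'creating a foo through the API' do
  it 'fakes the API response instead of hitting the real Sinatra app' do
    stub = stub_request(:post, 'http://api.example.com/foo/create')
             .to_return(:status => 201, :body => '{"id":1}',
                        :headers => { 'Content-Type' => 'application/json' })

    # Stand-in for the Rails code that performs the request.
    Net::HTTP.post_form(URI('http://api.example.com/foo/create'), 'name' => 'bar')

    expect(stub).to have_been_requested
  end
end

Note that this only fakes the HTTP layer; it won't exercise the records the Sinatra app would create in the shared database.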
I need to test some HTTP components in my Delphi app. I use DUnit and want to add some automation into testing.
So my testing code needs to start a local HTTP server, configure it (for example, to break the connection after 3 seconds, to simulate low bandwidth, or to ask for a login/password, etc.), run my unit tests, and shut the HTTP server down.
Are there any HTTP servers available specifically for Delphi/DUnit?
I know the Mozilla team has such a server, but it's not too easy to integrate into DUnit.
I use Indy's TIdHttpServer to serve stuff in the same process.
This approach allows me to check that the requests coming in are correct, as well as checking the behaviour from the client end.
Also, you can set up the server individually on a test-case-by-test-case basis, making your unit tests easier to understand (meaning that you don't have a piece of the 'test' somewhere else).
While #Nat's answer is workable, the setup code for stubbing requests and their associated responses using Indy can be pretty heavy. Also, when working this way, I found the test code to be quite a time drain to both write and debug. Hence I built a framework, Delphi WebMocks, for DUnitX (sorry, not DUnit) to do exactly this, with a syntax that should be straightforward if you know HTTP terminology.
For example, the setup code is as simple as:
WebMock.StubRequest('GET', '/')
.ToRespond
.WithHeader('Content-Type', 'application/json')
.WithBody('{ "value": 123 }');
You can also verify that requests actually got made, like:
WebMock.Assert
.Post('/')
.WithHeader('Content-Type', 'application/json')
.WithBody('{ "value": 123 }')
.WasRequested;
If the assertion fails, it will fail the DUnitX test.
There is a lot more to it in terms of how you can specify request matching and responses so please check it out if you think you'd find it useful.
You can use unit tests / DUnit to construct automated integration tests. Say your HTTP components act as an HTTP client making calls to a web service. You can build your own mock web service, or just use a public web service, like one of those from Google or Amazon. You would just need to create a Google or Amazon developer account and consume some basic service functions for testing.
If you're testing SOAP services, use SoapUI to stand up a "mock" service based on your WSDL.
You can have it return a variety of responses (either sequentially, or by using some simple scripting to match responses to the request contents). I've done this by matching the "request ID" (just a GUID) in the request sent from the DUnit test to a response in SoapUI. It's a simple XPath query to match them up.
You can have it return "canned" errors/exceptions, and of course when it's not running, you'll have the "nobody's home" test case.
I call a third party web service right now as part of my application. I am using the RestClient gem in order to do this. There are a ton of tools available to do the same thing, so that should not matter.
What I'm curious about is having good enough tests, nothing too fancy, where I can simulate how my application responds when the third-party web service is unavailable for whatever reason. Whether I exceeded a rate limit or hit a timeout due to network latency, I just want to be able to take something like an HTTP status code and test what my application does in that event.
What's the best way to do this with Test::Unit? Right now the call to the third party service is encapsulated inside of one of my controllers. I have a simple module with some wrapper methods for the different end points of the remote service. I just want to make sure that my application does the right things when the service is or isn't available.
Is using an additional framework alongside Test::Unit that can 'stub' the right way to go about doing this? Obviously I can't force a real network timeout, and starting to hack with things like iptables just for tests is not worth the time. I'm sure this problem has been solved a million times, since integrating services such as Facebook and Twitter into web applications is so popular these days. How do you test for failure when reaching those APIs in a robust, controlled way?
I would recommend using something like webmock to mock all of your http requests (not just to mock a failed request); it'll greatly speed up your test suite instead of having to actually hit the third party service every time you run the tests.
Webmock supports Rest Client and Test::Unit. Just put this code in your test/test_helper.rb file:
require 'webmock/test_unit'
As an example, to test a network timeout:
stub_request(:any, 'www.example.net').to_timeout
RestClient.post('www.example.net', 'abc') # ===> RestClient::RequestTimeout
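Similarly, you can simulate specific failure statuses, such as a rate limit, and assert how your code reacts (a sketch; the URL is a placeholder):

stub_request(:get, 'www.example.net').
  to_return(:status => 429, :body => 'Rate limit exceeded')

begin
  RestClient.get('www.example.net')
rescue RestClient::ExceptionWithResponse => e
  e.http_code # ===> 429
end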
RailsCast 291 (subscriber-only) talks about testing with VCR and RSpec (I know, it's not Test::Unit).
Anyway, you could look into using VCR for this sort of thing.
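A rough sketch of the VCR approach (the cassette directory and names are illustrative): VCR records the real response once and replays it on subsequent runs.

require 'vcr'
require 'rest-client'

VCR.configure do |c|
  c.cassette_library_dir = 'test/cassettes' # where recordings are stored
  c.hook_into :webmock                      # intercept HTTP via WebMock
end

# The first run records the real response into a cassette file;
# later runs replay it, so the test is fast and deterministic.
VCR.use_cassette('third_party_service') do
  RestClient.get('http://www.example.net/status')
end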
Edit #2: Does anyone have a good method of testing the "middle" of a client-server application, where we can intercept requests and responses, fake the client or server as needed, and which provides self-documentation of the API?
Cucumber might be a good solution in many cases, but it's not quite what I'm looking for. And this middle layer should be client/server implementation agnostic. (e.g., black-box).
Our client-server model is a Ruby on Rails server with a Flex client, using a RESTish interface with JSON as the data format. So anything the client posts to the server is usually a single JSON parameter. The server does its thing and responds with a pure JSON model.
We have standard rails testing on the server and we're working on getting proper FlexUnit tests completed on the client (it's a moving target). However, there's a debate in my team about the effectiveness of the current testing model, since every change on the server seems to break part of the API. This is telling me that there is both a problem with API communication (between team members, self-documentation in code, etc..), and a lack of proper API sanity testing.
So I've been questioning whether we need a mock client for testing the server at a pure JSON level (without all the other complexities of a rich client), and possibly a mock server for doing the same thing with the rich client. This would serve two purposes: documenting the API and providing more thorough testing of the API itself.
The reason there's a debate is that the rails guy claims that the rails integration testing is sufficient for testing all the server requests, and the middle-ground testing environment would simply be redundant.
So the question here is, given our situation, how should we go about self-documenting the API, and how should we test the API itself?
EDIT:
We have routes like /foo/444/bar.js, but the parameters can be virtually any complex JSON string depending on the action, e.g.:
json={
  "foo": {
    "x": 1,
    "y": 2
  },
  "bar": [1, 2, 3, 4, 5]
}
But besides manually edited API docs, there's no self-documentation. The Rails controller often just deserializes the JSON and applies changes directly to the model. It would be nice to have common tests that tell us when the API has changed and what's expected.
I just started looking at a web functional testing tool called MaxQ, and I think it has the potential to solve your problem. MaxQ acts as a proxy server between your web client and your server application.
It sits on top of JUnit, which means you could do proper unit testing for your API by asserting the behaviour and responses of calls to your server app.
It basically captures and records all the requests you make from a web client and the responses you get back from the server, and it can also generate test scripts from your requests, which you can play back to test any server.
You should try it out http://maxq.tigris.org/
You can think of it as two different projects: if you had two projects, you would have written two separate test suites, right?
You should start by establishing the API between the server and the client, as if you won't have any communication between the teams after you start implementing.
Then you build a client that consumes the API and a server that produces it (or the tests first, if you do TDD).
For testing, one team needs a mock server to supply fake API responses to test the client, and the other team needs to test the data produced by the server (i.e., the second team uses Rails integration testing, as your Rails guy claims).
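To make the mock-server side concrete, here's a rough sketch (reusing the route and payload from the question's example; the file name and everything else are illustrative) of a tiny Sinatra app the client team could develop against:

require 'sinatra'
require 'json'

# Stand-in server that returns the agreed-upon JSON contract
# for the route from the question.
get '/foo/:id/bar.js' do
  content_type :json
  { 'foo' => { 'x' => 1, 'y' => 2 }, 'bar' => [1, 2, 3, 4, 5] }.to_json
end

Run it with ruby mock_server.rb and point the Flex client at it while the real server is in flux.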
I would recommend Cucumber. It allows you to write specific tests for your application by emulating a browser. This way you can send requests and validate the JSON response easily.
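For example, a step definition along these lines (a sketch that assumes Rack::Test and rspec-expectations are available in the Cucumber World; the step wording, path, and expectations are illustrative) can exercise the JSON API directly:

# features/step_definitions/api_steps.rb (hypothetical)
require 'json'

When(/^I request bar for foo 444$/) do
  get '/foo/444/bar.js' # provided by Rack::Test::Methods
end

Then(/^the response should contain the foo data$/) do
  data = JSON.parse(last_response.body)
  expect(data['foo']).to include('x', 'y')
end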
One way you can do this is with controller tests using RSpec (you can also use Test::Unit):
describe PersonApiController do
  # GET /person/1.json
  it "should respond with a person" do
    person = Person.create(:name => "scott")
    get :show, :id => person.id, :format => 'json'
    response.should be_success
    JSON.parse(response.body)['name'].should == person.name
  end
end