two applications, how to test the communications - ruby-on-rails

I have two applications, say A and B, talking to each other via an API. I am writing Cucumber tests for A, and I have two options:
Just test that the API request is sent to B, and stub the response from B
Set up test data on B from A (since I am testing A), send a real request to B, and record the request/response with VCR
I prefer option #1, but my coworker says we need at least one real request to make sure the whole system (both A and B) is working.
My concerns are:
How do we prepare test data on B from A's tests?
It's fragile to couple them: anything changed on B may cause failures on A
Any comments?

For the majority of your tests, stub the request/response; that way the tests will pass when offline, and so on.
For one test, do a simple check that the external service is behaving as your stubs and mocks say it should.
E.g., a GET request still returns JSON with the attributes you expect, which ensures your mocks are valid.
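For instance, a minimal sketch using the WebMock gem (the endpoint and attribute names here are invented):
require 'json'
require 'net/http'
require 'webmock/rspec'

describe "A's client for B" do
  it "returns JSON with the attributes we expect" do
    # Stubbed response: the test passes offline, with no dependency on B
    stub_request(:get, "https://b.example.com/widgets/1")
      .to_return(
        :status => 200,
        :headers => { "Content-Type" => "application/json" },
        :body => '{"id": 1, "name": "widget"}'
      )
    response = Net::HTTP.get_response(URI("https://b.example.com/widgets/1"))
    JSON.parse(response.body).should include("id", "name")
  end
end
The single live test can be the same assertion with WebMock.allow_net_connect! enabled (or a VCR cassette) so that it hits the real service.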
For the most part, "uptime" of an external service shouldn't be monitored by your test suite; just check that it behaves how you expect it to.
For the uptime concern, look at the sysadmin side with Nagios, Pingdom, PagerDuty, or the like.

You are writing a Cucumber test, which means it is an integration test.
For an integration test, you'd better not mock anything; it's the last safety guard keeping your application safe.
So you'd better send at least one real request to make sure your request is correct, and what's more, you can repeat this real request at any time.
The problems with option #1:
You cannot tell when B changes its API implementation
You cannot make sure A sends the correct parameters to B
It's hard to mock complex requests
So I suggest creating a sandbox app for B and making real requests against it.
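For example, a sketch using the VCR gem, assuming B exposes a sandbox at b-sandbox.example.com (an invented URL):
require 'net/http'
require 'vcr'

VCR.configure do |config|
  config.cassette_library_dir = "spec/cassettes"
  config.hook_into :webmock
end

# The first run hits the real sandbox and records the exchange;
# subsequent runs replay it, so the real request is repeatable at any time.
VCR.use_cassette("b_sandbox_widget") do
  response = Net::HTTP.get_response(URI("https://b-sandbox.example.com/widgets/1"))
  raise "unexpected status #{response.code}" unless response.code == "200"
end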

How to get to Authorized part in Xcode UI Tests

I have an app with a login screen and the screen that appears after login (the authorized part).
What is the best approach to testing the screens in the authorized part?
I have several ideas:
1) Somehow I need to remove all the data from the keychain before each test, and then go through the entire flow each time to get to the first screen after login. When I need to send a login request to the backend, I wait for the main screen using:
let nextGame = self.app.staticTexts["Main Screen Text"]
let exists = NSPredicate(format: "exists == true")
expectation(for: exists, evaluatedWith: nextGame, handler: nil)
waitForExpectations(timeout: 5, handler: nil)
2) I pass some arguments here
app = XCUIApplication(bundle:….)
app.launchEnvironment = ["notEmptyArguments": "value"] // key/value pairs belong in launchEnvironment; launchArguments is a [String]
app.launch()
So I can pass a fake token that our backend will accept, so my app knows it has to route me to the Main Screen, and all the requests will be successful because my Network Service has this fake token.
But I feel that this is not a very safe way.
Do you have any ideas about the best approach, or maybe you can suggest a better one?
The second idea you mentioned is a good way to skip the login screen in tests. Furthermore, implementing token passing will be helpful to the developer team as well. Those launch arguments can be stored in the run scheme's settings.
Also, if you implement deep linking in the same manner, it will bring even more speed gains for both the QA and developer teams.
Surely, these "shortcuts" should only be accessible when running a debug configuration (using #if DEBUG...).
In my opinion, your login service, or whatever services your app needs to perform or show its use cases, should all be mocked. That means that in your automated unit/UI testing environment, your app talks to mocked service implementations: the login or authorization service response is mocked to either succeed or fail, so you can test both cases.
To achieve that, your services should all be represented as interfaces/protocols, and the implementation details should live in either the production, development, or automated-testing environment.
I would never involve any networking in automated testing. You should create a mock implementation of, for example, your authorization service, which in the automated test environment can be set to respond with success or failure depending on the test you are running (you can do this setup in the setUp() method, perhaps).
The most authentic test suite would sign in at the beginning of each test (if needed) and would sign out if appropriate during teardown. This keeps each test self-contained and allows each test to use a different set of credentials without needing to check if it's already signed in/needs to change to a different user account, because tests will always sign out at the end.
This is not a fool-proof method as it may not always be possible for teardown code to execute correctly if a test has failed (since the app may not be in the state that teardown expects, depending on your implementation) but if you are looking for end-to-end tests, using only the codepaths used by production users, this is one way you could do it.
Introducing mocking/stubbing can make your tests more independent and reliable - it's up to you to choose how much you want to mirror the production user experience in your tests.

Rails http request itself in tests hangs

Problem
Making an HTTP request from a model to a route on the same app results in a request timeout.
Background
Why would you want to make an HTTP request to your own app rather than just call a method?
Here is my story: there is a Rails app A (let's call it shop) and a Rails app B (let's call it warehouse) that talk to each other over HTTP.
I'd like to be able to run both of them in a single system test to test the end-to-end workflow. Rails only runs a single service, but one can mount app B as a Rails engine into app A, effectively having two apps in a single service. However, they still talk to each other over HTTP, and that's the bit that does not work.
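The mounting step itself is straightforward; a sketch, assuming the warehouse app is packaged as an engine named Warehouse::Engine (an illustrative name):
# config/routes.rb in app A (shop)
Rails.application.routes.draw do
  # Serve the warehouse engine alongside the shop's own routes,
  # so both apps run in a single service
  mount Warehouse::Engine => "/warehouse"
end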
Thoughts
It looks as if the second request hits some kind of thread lock around Active Record. The reason I suspect Active Record is that I was able to make an HTTP call to the app itself from the controller (that is, before any Active Record code kicked in).
Question
Is it possible to work around that?

Mock API Requests Xcode 7 Swift Automated UI Testing

Is there a way to mock requests when writing automated UI tests in Swift 2.0? As far as I am aware, the UI tests should be independent of other functionality. Is there a way to mock the response from server requests in order to test the behaviour of the UI depending on the response? For example, if the server is down, the UI tests should still run. A quick example for login: if the password check fails, the UI should show an alert; if the login is successful, the next page should be shown.
In its current implementation, this is not directly possible with UI Testing. The only interface the framework has to the code is through its launch arguments/environment.
You can have the app look for a specific key or value in this context and switch up some functionality. For example, if the MOCK_REQUESTS key is set, inject a MockableHTTPClient instead of the real HTTPClient in your networking layer. I wrote about setting the parameters and NSHipster has an article on how to read them.
While not ideal, it is technically possible to accomplish what you are looking for with some legwork.
Here's a tutorial on stubbing network data for UI Testing I put together. It walks you through all of the steps you need to get this up and running.
If you are worried about the idea of mocks making it into a production environment for any reason, you can consider using a 3rd party solution like Charles Proxy.
Using the map local tool, you can route calls from a specific endpoint to a local file on your machine. You can paste plain text into your local file containing the response you want it to return. Per your example:
Your login hits the endpoint yoursite.com/login
In Charles, using the map local tool, you can route the calls hitting that endpoint to a file saved on your computer, e.g. mappedlocal.txt
mappedlocal.txt contains the following text
HTTP/1.1 404 Failed
When Charles is running and you hit this endpoint, your response will come back with a 404 error.
You can also use another option in Charles called "map remote" and build an entire mock server which can handle calls and responses as you wish. This may not be exactly what you are looking for, but it's an option that may help others, and it's one I use myself.

HTTP server for unit tests in Delphi

I need to test some HTTP components in my Delphi app. I use DUnit and want to add some automation to the testing.
So my testing code needs to start a local HTTP server, configure it (for example, to break the connection after 3 seconds, simulate low bandwidth, or ask for a login/password), run my unit tests, and shut the HTTP server down.
Are there HTTP servers available specifically for Delphi/DUnit?
I know that the Mozilla team has such a server, but it's not easy to integrate with DUnit.
I use Indy's TIdHttpServer to serve stuff in the same process.
This approach allows me to check that the requests coming in are correct, as well as checking the behaviour from the client end.
Also, you can set up the server individually on a testcase-by-testcase basis, making your unit tests easier to understand (meaning that you don't have a piece of the 'test' somewhere else).
While @Nat's answer is workable, the setup code for stubbing requests and their associated responses using Indy can be pretty heavy. Also, when working this way, I found the test code to be quite a time drain to both write and debug. Hence I built a framework, Delphi WebMocks, for DUnitX (sorry, not DUnit) to do exactly this, with a syntax that should be straightforward if you know HTTP terminology.
For example, the setup code is as simple as:
WebMock.StubRequest('GET', '/')
  .ToRespond
  .WithHeader('Content-Type', 'application/json')
  .WithBody('{ "value": 123 }');
You can also verify that the requests were actually made, like:
WebMock.Assert
  .Post('/')
  .WithHeader('Content-Type', 'application/json')
  .WithBody('{ "value": 123 }')
  .WasRequested;
If the assertion fails, it will fail the DUnitX test.
There is a lot more to it in terms of how you can specify request matching and responses so please check it out if you think you'd find it useful.
You may use unit tests / DUnit to construct automatic integration tests. Say your HTTP components act as an HTTP client making calls to a web service. You may write your own mock web service, or just use any public web service, like one of those from Google or Amazon. So you just need to create a Google or Amazon developer account and consume some basic service functions for testing.
If you're testing SOAP services, use SoapUI to stand up a "mock" service based on your WSDL.
You can have it return a variety of responses (either sequentially, or with some simple scripting to match responses to the request contents). I've done this by matching the "request ID" (just a GUID) in the request sent from the DUnit test to a response in SoapUI. It's a simple XPath query to match them up.
You can have it return "canned" errors/exceptions, and of course when it's not running, you'll have the "nobody's home" test case.

What is the recommended method of testing a JSON client-server api?

Edit #2: Does anyone have a good method of testing the "middle" of a client-server application where we can intercept requests and responses, fake the client or server as needed, and which provides self-documentation of the api?
Cucumber might be a good solution in many cases, but it's not quite what I'm looking for. And this middle layer should be client/server implementation agnostic. (e.g., black-box).
Our client-server model is a Ruby on Rails server with a Flex client, using a RESTish interface with JSON as the data format. So anything the client posts to the server is usually a single JSON parameter. The server does its thing and responds with a pure JSON model.
We have standard Rails testing on the server, and we're working on getting proper FlexUnit tests completed on the client (it's a moving target). However, there's a debate in my team about the effectiveness of the current testing model, since every change on the server seems to break part of the API. This tells me that there is both a problem with API communication (between team members, self-documentation in code, etc.) and a lack of proper API sanity testing.
So I've been questioning whether we need to have a mock client for testing the server at a pure JSON level (without all the other complexities of a rich client), and possibly a mock-server for doing the same thing with the rich client. This would serve two purposes, to document the API and to provide more thorough testing of the API itself.
The reason there's a debate is that the rails guy claims that the rails integration testing is sufficient for testing all the server requests, and the middle-ground testing environment would simply be redundant.
So the question here is, given our situation, how should we go about self-documenting the API, and how should we test the API itself?
EDIT:
We have routes like /foo/444/bar.js, but the parameters can be virtually any complex JSON string depending on the action, e.g.:
json={
  "foo": {
    "x": 1,
    "y": 2
  },
  "bar": [1, 2, 3, 4, 5]
}
but besides manually edited API docs, there's no self-documentation. The Rails controller often just deserializes the JSON and applies changes directly to the model. It would be nice to have common tests to tell us when the API has changed and what's expected.
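Something like this pure-JSON "mock client" test is what I have in mind (a sketch; the response shape is hypothetical):
# spec/requests/foo_spec.rb
require 'spec_helper'
require 'json'

describe "POST /foo/:id/bar.js" do
  it "accepts the documented payload and responds with JSON" do
    payload = { "foo" => { "x" => 1, "y" => 2 }, "bar" => [1, 2, 3, 4, 5] }
    post "/foo/444/bar.js", :json => payload.to_json
    response.should be_success
    JSON.parse(response.body).should include("foo") # hypothetical response shape
  end
end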
I just started looking at a web functional-testing tool called MaxQ, and I think it has the potential to solve your problem. MaxQ acts as a proxy server between your web client and your server application.
It sits on top of JUnit, which means you could do proper unit testing of your API by asserting the behavior and responses of calls to your server app.
It basically captures and records all the requests you make from a web client and the responses you get back from the server. It can also generate test scripts from your requests, which you can play back to test any server.
You should try it out: http://maxq.tigris.org/
You can think of it as two different projects: if you had two projects, you would have written two separate test suites, right?
You should start by establishing the API between the server and the client, as if there will be no communication between the teams once you start implementing.
Then you build the client that consumes the API and a server that produces it (or the tests first, if you do TDD).
For testing, one team needs a mock server to supply fake API responses to test the client, and the other team needs to test the data produced by the server (i.e., the second team uses Rails integration testing, as your Rails guy claims).
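A minimal mock-server sketch in Ruby using WEBrick (the route and payload are invented for illustration):
require 'json'
require 'webrick'

# Serve a canned JSON response so the client team can test without the real server
server = WEBrick::HTTPServer.new(:Port => 4567)
server.mount_proc("/foo/444/bar.js") do |request, response|
  response["Content-Type"] = "application/json"
  response.body = { "foo" => { "x" => 1, "y" => 2 }, "bar" => [1, 2, 3, 4, 5] }.to_json
end
trap("INT") { server.shutdown }
server.start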
I would recommend Cucumber. It allows you to write specific tests for your application by emulating a browser. This way you can send requests and validate the JSON response easily.
One way you can do this is with controller tests using RSpec (you can also use Test::Unit):
describe PersonApiController do
  # GET /person/1.json
  it "should respond with a person" do
    person = Person.create(:name => "scott")
    get :show, :id => person.id, :format => 'json'
    response.should be_success
    JSON.parse(response.body)["name"].should == person.name # assert against the parsed JSON body
  end
end
