Rails tests with various integrations - ruby-on-rails

I have a rails app that interacts with an external api (Salesforce) that relies upon external data sitting in a remote database. I've written a wrapper that wraps this code so that users can just call get_by_id(id) instead of writing the corresponding sql query.
I want to test this code, and I am not sure how I should go about it. Should I be hitting the Salesforce backend database for the tests, calling the real methods? Or should I just mock the results of the method calls? I am perpetually confused by what I should test...

You should write a test suite for your Salesforce interaction.
A basic principle of testing is that your tests should not fail because of external factors. However, your app should still be able to recover from Salesforce's errors.
From Rails 4 Test Prescriptions:
Unfortunately, interacting with a third-party web service introduces a lot of complexity to our testing. Connecting to a web service is slow—even slower than the database connections we’ve already tried to avoid. Plus, connection to a web service requires an Internet connection... Some external services are public—we don’t want to post an update to Twitter every time we run our tests, let alone post a credit-card payment to PayPal.
Also, the book has some guidelines:
A fake server, which intercepts HTTP requests during a test and returns a canned response object. We’ll be using the VCR gem...
An adapter, which is an object that sits between the client and the server to mediate access between them.
A smoke test, which goes from the client all the way to the real server... a full end-to-end test of the entire interaction. We don’t want to do this often, for all the reasons listed earlier, but it’s useful to be able to guard against changes in the server API.
An integration test, which goes from the client to the fake server. This tests the entire end-to-end functionality of our application but uses a stubbed response from the server.
A client unit test, which starts on the client and ends in the adapter. The adapter’s responses are stubbed, meaning that the adapter isn’t even making fake server calls. This allows us to unit-test our client completely separate from the server API.
An adapter unit test, which starts in the adapter and ends in the fake server. These tests are the final piece of the chain and allow us to validate the behavior of the adapter separate from any client or the actual server.
By the way, I think the book is a must-have.
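To make the adapter and client-unit-test ideas from the quote concrete, here is a minimal RSpec sketch built around the get_by_id(id) wrapper from the question. SalesforceAdapter, AccountLookup, and the shape of the returned record are assumptions for illustration, not from the book or the question:

```ruby
# app/services/salesforce_adapter.rb
# Hypothetical adapter: the only object allowed to touch Salesforce's data.
class SalesforceAdapter
  def get_by_id(id)
    # The real implementation would run the SQL against the remote
    # Salesforce-backed database here.
    raise NotImplementedError
  end
end

# app/services/account_lookup.rb
# Hypothetical client code that depends only on the adapter's interface.
class AccountLookup
  def initialize(adapter: SalesforceAdapter.new)
    @adapter = adapter
  end

  def display_name(id)
    record = @adapter.get_by_id(id)
    record ? record["Name"] : "unknown"
  end
end

# spec/services/account_lookup_spec.rb
# Client unit test: the adapter is stubbed, so no network or database is hit.
require "rails_helper"

RSpec.describe AccountLookup do
  it "returns the name from whatever the adapter hands back" do
    adapter = instance_double(SalesforceAdapter)
    allow(adapter).to receive(:get_by_id).with(42).and_return("Name" => "Acme")

    lookup = AccountLookup.new(adapter: adapter)
    expect(lookup.display_name(42)).to eq("Acme")
  end
end
```

The adapter unit test would then be the only place that talks to a fake (or, in a smoke test, the real) Salesforce backend.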

Related

Rails External API Testing

I'm writing tests for my Rails app. My app communicates with an external API that processes customer payments, specifically BrainTree. Now, I want to make sure that my app's class which communicates with BrainTree works properly, e.g. that it submits user information and other parameters to BrainTree correctly. The goal is only to test that BrainTree and my app are communicating properly.
One thing to note, is that BrainTree has a sandbox account. To test my class, should I:
Write a feature test using something like Capybara and Rspec and test it from a user's perspective e.g. user logs in, fills out form, submits payment etc.
Write a request spec that just submits the required information and examines the return values. This is what I would prefer, but it's tricky since BrainTree requires JS, and I'm not sure I can use JS in a request spec without monkey-patching RSpec, which I'd rather not do since I'm still fairly new to RSpec and testing in general.
Write both feature and request specs
Write a completely different type of test
I have a feature test in place, but it seems cumbersome to use it just to test an external API, since it needs to open a browser, fill out forms, etc. I'd rather stub the external API in my feature spec and test the API class as a unit test. A request spec seems more efficient, but the JS requirement seems like a roadblock.
Is there a Best Practice to what I should do in my scenario above?
In general, you don't typically want to write tests only for an external service, but instead for your own code that tests against the responses you receive.
The best way I've found to stub a response from an external API is the VCR gem. This will get a legitimate response and save it for use in future runs. You can erase the stored response occasionally to ensure continued functionality.
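For reference, a minimal VCR setup looks roughly like the sketch below; the cassette name, the PaymentGateway class, and the spec body are illustrative assumptions, not from the question:

```ruby
# spec/support/vcr.rb
require "vcr"

VCR.configure do |c|
  c.cassette_library_dir = "spec/cassettes"
  c.hook_into :webmock    # record/replay HTTP made through WebMock
  c.filter_sensitive_data("<BT_KEY>") { ENV["BRAINTREE_PRIVATE_KEY"] }
end

# In a spec: the first run records the sandbox response, later runs replay it.
it "creates a transaction against the sandbox" do
  VCR.use_cassette("braintree/create_transaction") do
    result = PaymentGateway.new.charge(amount_cents: 10_00, nonce: "fake-valid-nonce")
    expect(result).to be_success
  end
end
```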
Another approach is to use a fake service that mimics the API. The Fake Braintree gem provides such functionality, and I've used it alongside VCR for other tests to ensure correct functionality. There are many other approaches, so experiment to see which one fits your needs.

In Rails, how can I stub a websocket message for a test?

The application is using Minitest on Rails 4 with Capybara.
I'd like to write an integration or feature test that stubs a websocket connection (the application uses Faye as a client) to return a specific message (like I'm used to doing with Webmock).
Is this possible? If so, can you provide an example? My research has not turned up any examples.
Your research hasn't turned up any examples because that's not really what you're supposed to be doing in feature tests. Feature tests are meant to be end-to-end, black-box tests: you configure the data the app needs to generate the desired results, and then all interaction is done via the browser, with no mocking or stubbing (which technically alters your app's code). Additionally, when connections between the browser and a third-party service are involved, there is nowhere in your app where you can mock them.
It may be possible to stub a websocket connection from the browser with a programmable proxy like puffing-billy, but it's generally cleaner to build a small fake version of the third-party service for testing purposes (a Sinatra app, etc.) and point the browser at that, rather than the original service, when you need to craft custom responses. Additionally, there are a lot of fakes already out there, depending on what service you are using (fake-stripe, fake-s3, etc.), which may provide the functionality you're looking for.
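If you go the fake-service route, the general shape is something like the sketch below. This is a hypothetical Sinatra stand-in in the spirit of fake-stripe/fake-s3; a real websocket flow would need a fake that speaks the same protocol as Faye (or a locally run Faye server seeded with canned messages). The PUSH_SERVICE_URL variable is invented:

```ruby
# spec/support/fake_push_service.rb
# Hypothetical HTTP fake. It only illustrates the "point the browser at a
# tiny local app instead of the real service" shape; it does not speak
# Faye's Bayeux/websocket protocol.
require "sinatra/base"
require "json"

class FakePushService < Sinatra::Base
  get "/messages" do
    content_type :json
    [{ channel: "/notifications", data: { text: "canned message" } }].to_json
  end
end

# In the test setup you can boot it with Capybara and point the app's
# client-side config at the resulting host/port instead of the real service:
#   server = Capybara::Server.new(FakePushService.new).boot
#   ENV["PUSH_SERVICE_URL"] = "http://#{server.host}:#{server.port}"
```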

Test integration between two rails apps

I am developing a system composed of two different rails applications (server and client) which communicate via rest web services.
I have tests for each app individually, but I would like to add some test for the integration between the two platforms, to assert that one creates a request compatible with what the other is expecting.
Any hints would be appreciated.
I have a similar architecture, and we are using VCR to record all the server-side responses and avoid making real requests every time. It can get annoying, and right now I'm looking for a way to clean data from the server after every request.
I think VCR could be a good starting point for testing the integration between your apps.
You can find documentation here -> Relish Docs
I think there could be several approaches here, depending on what you have implemented:
If the client Rails app has a user interface, try writing Selenium tests that perform the integration test in your local dev environment, or in a staging environment that has both apps deployed (not ideal if the interface is still a prototype and changing frequently).
Maybe part of the client can be extracted into a Ruby gem (e.g. the REST communication layer). In the test environment, the server Rails app can then use the client gem to run integration tests, i.e. call the same module functions the client uses, while the client Rails app also uses the gem to make its requests to the server (see the sketch below). Here's a good guide to start migrating some of your reusable code to a rubygem.
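A rough sketch of the shared-gem idea; MyServiceClient, the /widgets endpoint, and the usage shown in the comments are invented names for illustration:

```ruby
# lib/my_service_client/client.rb  -- lives in the shared gem
require "net/http"
require "json"
require "uri"

module MyServiceClient
  class Client
    def initialize(base_url)
      @base_url = base_url
    end

    # The same method the client app calls in production.
    def create_widget(attrs)
      uri = URI("#{@base_url}/widgets")
      response = Net::HTTP.post(uri, attrs.to_json,
                                "Content-Type" => "application/json")
      JSON.parse(response.body)
    end
  end
end

# In the server app's test suite, exercise the gem against the app itself,
# so a change in the server's API breaks this test immediately:
#   client = MyServiceClient::Client.new("http://localhost:3000")
#   client.create_widget(name: "example")
```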

Testing Applications built on top of RESTful web services

Let's say I am developing a web application which talks to a RESTful web service for certain things.
The RESTful web service isn't third party, but is being developed in parallel with the main application (a good example would be an e-commerce application and a payment processor, or a social network and an SSO system).
In such a system, acceptance (Cucumber) or functional tests can be done in a few ways:
1. By mocking out all external calls using an object-level mocking library such as Mocha or JMock.
2. By doing the mocking at the HTTP level, using libraries such as WebMock.
3. By actually letting the main application make the real call.
The problem with #1 and #2 is that if the API of the underlying application changes, my tests will keep passing while the code actually breaks, defeating the purpose of the tests in the first place.
The problem with #3 is that there is no way to roll back the data the way the test suite does on teardown. I also run my tests in parallel, so if I hit the actual web services I will get errors such as "username taken" and the like.
So the question to community is, what is the best practice?
Put your main application in a development or staging environment. Spin up your web service in the same environment. Have the one call the other. Control the fixture data for both. That's not mocking; you're not getting a fake implementation.
Not only will this allow you to be more confident in real-world stability, it will allow you to test performance in a staging environment, and also allow you to test your main application against a variety of versions of the web service. It's important that your tests don't do the wrong thing when your web service changes, but it's even more important that your main application doesn't either. You really want to know this confidently before either component is upgraded in production.
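In practice that means the service endpoint should be plain configuration, so the same client code can be pointed at a locally spun-up copy of the service in test/staging and at the real one in production. The config.x key and the environment variable name below are assumptions:

```ruby
# config/environments/staging.rb (or an initializer shared by test/staging)
Rails.application.configure do
  # Hypothetical setting: the payment service URL comes from the environment,
  # so CI/staging can point at a locally booted copy of the service.
  config.x.payment_service_url =
    ENV.fetch("PAYMENT_SERVICE_URL", "http://localhost:3001")
end
```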
I can't see that there is a middle ground between isolating your client from the service and actually hitting the service. You could have erroneously passing tests because the service has changed behavior, but don't you have some "contract" with the development team working on that service that holds them responsible for breakage?
You might try FakeWeb and get a fresh copy of the expected results each day so your tests won't run against stale responses.
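If you do isolate at the HTTP level, WebMock (option #2 above, and a more actively maintained alternative to FakeWeb) lets you pin a canned response; the URL and response body here are made up:

```ruby
# spec/spec_helper.rb
require "webmock/rspec"   # disables real HTTP and enables stub_request in specs

# In a spec: any POST to this endpoint gets the canned response below.
stub_request(:post, "https://payments.example.com/charges")
  .to_return(status: 201,
             body: { id: "ch_123", status: "paid" }.to_json,
             headers: { "Content-Type" => "application/json" })
```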

Receiving REST response on local machine

I use a web service to convert files. The service returns the converted file as an HTTP POST, along with identifier data. My app receives the response, updates its database and saves the file to the appropriate location.
At least that's the idea, but how do I develop and test this on a local machine? Since my machine isn't publicly facing, I can't give the service a URL it can actually reach. What's the best way to handle this? I want to keep the process as clean as possible, and the only ideas I've come up with seem excessively kludgey.
Given how common REST API development is, I assume there are well-established best practices for this. Any help appreciated.
The solution will change a bit depending on which server you're using.
But the generally accepted method is to use the loopback address, 127.0.0.1, in place of a fully qualified domain name. Your server may need to be reconfigured to listen on this IP address, but that's usually a trivial fix.
example: http://127.0.0.1/path/to/resource.html
You can use curl, or even your browser if your application has a proper frontend. There are many other similar tools for testing this from the command line, and each language has libraries for establishing HTTP connections and transferring data over them.
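For example, hitting the loopback endpoint from Ruby takes only a few lines; the path and payload are placeholders:

```ruby
require "net/http"
require "json"
require "uri"

# Simulate the external service POSTing its result to the locally running app.
uri = URI("http://127.0.0.1:3000/conversions/callback")
response = Net::HTTP.post(uri,
                          { id: 42, status: "done" }.to_json,
                          "Content-Type" => "application/json")
puts response.code
puts response.body
```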
If your machine isn't accessible to the service you are using, then your only real option is to build a local implementation of the service that will exercise your API. A rake task that sends the POST with the file and the identifier data would be a nice approach, so you could start your Rails app locally and then kick off the task with some params to run your application through its paces.
This is the case any time you are trying to develop a system that can't connect to a required resource during development. You need to build a development harness of sorts so that you can exercise all the different types of actions the external service will call on your application.
This certainly won't be easy or straightforward, especially if your interface to the external service is complicated. Be sure to have your test cases send bad POSTs to your application as well, so that you are sure you handle both what you expect and what you don't.
Also make sure that you do some integration testing with the actual service before you "go-live" with the application. Hopefully you can deploy to an external server that the web service will be able to access in order to test. Amazon's EC2 hosting environment would let you set up a server very quickly, run your tests, and then shut down without much cost at all.
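The rake-task idea mentioned above might look roughly like this; the endpoint, parameter names, and default file path are invented:

```ruby
# lib/tasks/fake_callback.rake
# Simulates the conversion service POSTing a finished file back to the app
# running locally on port 3000.
require "net/http"
require "uri"

namespace :dev do
  desc "Send a fake conversion callback to the locally running app"
  task :fake_callback, [:file] do |_t, args|
    uri  = URI("http://127.0.0.1:3000/conversions/callback")
    file = args[:file] || "tmp/converted.pdf"

    Net::HTTP.start(uri.host, uri.port) do |http|
      request = Net::HTTP::Post.new(uri)
      # Multipart form with the identifier data and the converted file.
      request.set_form([["identifier", "abc-123"],
                        ["file", File.open(file)]],
                       "multipart/form-data")
      puts http.request(request).code
    end
  end
end
```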
You have 2 options:
Set up dynamic dns and expose your app to the outside world. This only works if you have full control over your network.
Use something like webrat to fake the posts to your app. Since it's only 1 request, this seems pretty trivial.
Considering that you should be writing automated tests for this, I'd go with #2. I used to do #1 when developing Facebook apps, since there were far too many requests to mock them all out with webrat.
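In current RSpec, option #2 is usually a plain request spec that plays the role of the incoming POST; the route, params, and fixture file below are made up:

```ruby
# spec/requests/conversion_callback_spec.rb
# Simulates the converter service hitting the callback endpoint directly;
# no browser or JS driver is involved.
require "rails_helper"

RSpec.describe "conversion callbacks" do
  it "accepts the converted file and identifier data" do
    post "/conversions/callback",
         params: {
           identifier: "abc-123",
           file: Rack::Test::UploadedFile.new("spec/fixtures/converted.pdf",
                                              "application/pdf")
         }

    expect(response).to have_http_status(:ok)
  end
end
```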
If your question is about testing, why not use mocks to fake the server? It's more elegant than using Webrat, and easier to deploy (you only have one app instead of an app and a test environment).
More info about mocks: http://blog.floehopper.org/presentations/lrug-mock-objects-2007-07-09/
There is also some info about mocks with RSpec here: http://rspec.info/documentation/mocks/
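And if the part you want to isolate is the outbound call to the conversion service, an RSpec double along these lines keeps the spec entirely local; ConverterClient, DocumentConversionService, and their methods are invented for the example:

```ruby
# The object that performs the HTTP call is injected, so a verifying double
# can stand in for it during the test.
converter = instance_double("ConverterClient")
allow(converter).to receive(:convert)
  .with("input.docx")
  .and_return("status" => "queued", "id" => "abc-123")

service = DocumentConversionService.new(client: converter)
expect(service.enqueue("input.docx")).to eq("abc-123")
```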
