I'm writing a spec to verify that my Video model creates a proper thumbnail for a Vimeo video when the record is created. It looks something like this:
it "creates thumbnail url" do
vimeo_url = "http://player.vimeo.com/video/12345"
vid = Factory.build(:video, video_url:vimeo_url)
# thumbnail created when saved
vid.save!
expect do
URI.parse(vid.thumbnail_url)
end.to_not raise_error
end
The problem is that my test is super slow because it has to hit vimeo.com. So I'm trying to stub the method that calls out to the server. Two questions:
1) Is this the correct way/time to stub something?
2) If yes, how do I stub it? In my Video model I have a method called get_vimeo_thumbnail() that hits vimeo.com, and I want to stub that method. But if in my spec I do vid.stub(:get_vimeo_thumbnail).and_return("http://someurl.com"), it doesn't work: when I run the test it still hits vimeo.com.
The VCR gem is probably worth considering. It hits the real web service the first time you run a test and records the response so that it can be replayed on later runs (making subsequent tests fast).
I can't see anything wrong with the stub call you are making if you are calling stub before save!.
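For example, a minimal sketch using the old RSpec stub syntax from the question (current RSpec would be allow(vid).to receive(...)), and assuming the model assigns thumbnail_url from get_vimeo_thumbnail in a save callback:

it "creates thumbnail url" do
  vid = Factory.build(:video, video_url: "http://player.vimeo.com/video/12345")
  # Stub before save! so the callback never hits vimeo.com
  vid.stub(:get_vimeo_thumbnail).and_return("http://someurl.com")
  vid.save!
  expect { URI.parse(vid.thumbnail_url) }.to_not raise_error
end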
I also second the use of the 'vcr' gem.
There's also a (pro) episode of RailsCasts about VCR:
http://railscasts.com/episodes/291-testing-with-vcr
VCR can be used to record all outgoing web service calls into "cassettes" (fixtures) that are replayed when the tests run again. So you get the initial set of "real-world" responses but no longer hit the remote API.
It also has options for making "on demand" requests when no recorded response is available locally, and for making explicit "live" requests.
You can, and should, run tests against the live endpoint from time to time to verify.
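A minimal configuration sketch (the paths and options shown are assumptions; see the VCR docs for the full list):

# spec/support/vcr.rb
VCR.configure do |c|
  c.cassette_library_dir = 'spec/fixtures/vcr_cassettes'
  c.hook_into :webmock
  c.configure_rspec_metadata! # lets you tag examples with :vcr
  # c.default_cassette_options = { record: :new_episodes } # record "on demand"
end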
I am using Watir with a headless browser. I need to perform three steps on another website to get the information I want: add a location, add a vehicle, and fetch the product list.
Currently I submit these three details from my server and perform all three steps in a single HTTP request with the help of Watir and Headless.
I want to break that one HTTP request down into three HTTP requests on my server:
1) add_location: fire an HTTP request that opens the headless browser and selects the location.
2) add_vehicle: fire an HTTP request that reuses the headless browser in which the location was added and selects the vehicle.
3) fetch_product: fire an HTTP request that reuses the headless browser in which the location and vehicle were added and fetches the product list.
I cannot find a way on the Rails side to reuse the Watir/Headless session that was opened in a previous HTTP request.
Code Sample:
class TestsController < ApplicationController
  def add_location
    @headless = Headless.new
    @headless.start
    @watir = Watir::Browser.new
    @watir.goto('www.google.com')
    @watir.text_field(id: 'findstore-input')
          .wait_until(&:present?).set(params[:zip_code])
    @watir.a(id: 'findstore-button').click
    @watir.div(class: 'notifier').wait_while(&:present?)
  end

  def add_vehicle
    # need to reuse the @watir object from above in this action
  end
end
The design change from one request to three has a big impact on your API, as even this simple part becomes stateful, i.e. you need to keep state between each of the three requests.
Once you understand that, you have different possibilities.
One possibility is to build up your information request after request, and only when it is complete use Watir to get the information you need (a sketch follows below).
This is basically just a change to the API: you store the data in a session, cookie, database or whatever.
It doesn't require big changes, but it does not bring any advantage either.
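A minimal sketch of that approach (the controller, session keys, and ProductScraper class are hypothetical):

class ProductsController < ApplicationController
  def add_location
    session[:zip_code] = params[:zip_code] # just store, no browser yet
    head :ok
  end

  def add_vehicle
    session[:vehicle] = params[:vehicle]
    head :ok
  end

  def fetch_product
    # Only now drive Watir, using everything gathered so far
    products = ProductScraper.new(session[:zip_code], session[:vehicle]).fetch
    render json: products
  end
end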
Another possibility, which you should forget immediately: you could keep a global reference to your browser object and pass it around between requests, but it has a HUGE memory impact and you could run into race conditions.
NEVER do this, please.
In case you really want to split the Watir work into three different steps (e.g. because it is too slow), you can use a background job to which you transmit the user's data as it arrives (using a dedicated database table, websocket, or whatever), then wait for the job to finish (i.e. get a result), e.g. by polling until it's available.
This solution requires a lot more work, but it keeps the HTTP requests with your client lightweight and allows you to do any kind of complex task in the background that would otherwise probably time out.
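A rough sketch of that background-job variant (the job, model, and polling endpoint are hypothetical):

class ScrapeProductsJob < ApplicationJob
  def perform(request_id, zip_code, vehicle)
    browser = Watir::Browser.new :chrome, options: { args: ['--headless'] }
    products = [] # ... drive the site with zip_code and vehicle, collect products ...
    ScrapeResult.create!(request_id: request_id, payload: products.to_json)
  ensure
    browser&.close
  end
end

# The client then polls e.g. GET /scrape_results/:request_id until the row exists.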
You can make use of a hooks file to initialize the browser in headless mode and assign it to an instance variable, which you can then call from separate methods to pass URLs to the browser.
For example, in your hooks you can add it as below:
@browser = Watir::Browser.new :chrome, options: { args: ['--headless'] }
You can then reuse @browser.goto('www.google.com') in one def and use the same instance in other calls as well:
def example1
  @browser.goto('www.google.com')
end

def example2
  @browser.goto('www.facebook.com')
end

# ...and so on
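For context, this answer seems to assume a test-framework hooks file (e.g. Cucumber's features/support/hooks.rb); a sketch under that assumption:

# features/support/hooks.rb
browser = nil

Before do
  # Create the headless browser once and reuse it across scenarios
  browser ||= Watir::Browser.new :chrome, options: { args: ['--headless'] }
  @browser = browser
end

at_exit { browser&.close }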
Hope this helps.
I want to create a callback in my User model: after a user is created, a callback runs get_followers to fetch that user's Twitter followers (via the FullContact API).
This is all a bit new to me...
Is this the correct approach, putting the request in a callback, or should it live in the controller somewhere? And how do I make the request to the endpoint in Rails, and where should I process the data that is returned?
EDIT... Is something like this okay?
User.rb
require 'open-uri'
require 'json'

class Customer < ActiveRecord::Base
  after_create :get_twitter

  private

  def get_twitter
    source = "url-to-parse.com"
    @data = JSON.parse(JSON.load(source))
  end
end
A few things to consider:
The callback will run for every Customer that is created, not just those created in the controller. That may or may not be desirable, depending on your specific needs. For example, you will need to handle this in your tests by mocking out the external API call.
Errors could occur in the callback if the service is down, or if a bad response is returned. You have to decide how to handle those errors.
You should consider having the code in the callback run in a background process rather than in the web request, if it is not required to run immediately. That way errors in the callback will not produce a 500 page, and will improve performance since the response can be returned without waiting for the callback to complete. In such a case the rest of the application must be able to handle a user for whom the callback has not yet completed.
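For instance, a sketch of deferring the callback work to a background job (the job name is hypothetical; any queueing library such as ActiveJob, Sidekiq, or Resque works):

class Customer < ActiveRecord::Base
  after_create :queue_twitter_fetch

  private

  def queue_twitter_fetch
    # Enqueue instead of calling the API inline; the web response returns immediately
    FetchTwitterFollowersJob.perform_later(id)
  end
end

class FetchTwitterFollowersJob < ActiveJob::Base
  def perform(customer_id)
    customer = Customer.find(customer_id)
    # Call the external API here; failures can be retried without producing a 500 page
  end
end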
I am trying to set up unit tests for all of my HTTP requests. Every request requires authentication; in my app that means authentication via a cookie and a DB query.
I have a preDispatch method in a parent controller that looks like this:
$this->cookie = Cookie::readCookie();

if (is_null($this->cookie))
{
    return $this->failResponseView();
}

$this->dm = $this->getServiceLocator()->get('doctrine.documentmanager.odm_default');

// Does not have authorization
if (!$this->hasAppAccess())
{
    return $this->failResponseView();
}
This has been working fine as far as the app is concerned. But running PHPUnit fails every time because the cookie can't be read, or the response is written before the cookie is read.
This is me mirroring what I do in the regular app, in my test setup method:
$this->_cookie = new Cookie(array('access_token' => $profile['token']));
$this->_cookie->setCookie();
However, I receive this Exception when the code reaches this point. My question is, how can I fake, or bypass my cookie authentication when running phpunit to make sure all of these authenticated requests work?
Cannot modify header information - headers already sent by (output started at D:
\www\app\vendor\phpunit\phpunit\PHPUnit\Util\Printer.php:172)
UPDATE
It looks like, since PHPUnit\Util\Printer is outputting to STDOUT (see above), it does not like that I am trying to write a cookie. Running with this flag allowed full execution:
phpunit --stderr
So I am able to call the setCookie() method and it executes fine. But when the code reaches Cookie::readCookie(), even though the cookie has already been set, it can't read it; it returns null.
So question is still pretty much the same. What do I do to test this app if it uses cookie authentication?
Ugh, it's always something simple. In my setup method, I can just do this...
$_COOKIE[$name] = $this->_cookie->getData();
Hi, I am trying to test Google auth with Cucumber, using VCR with a tag.
Everything goes fine until the token expires. I think when it expires this happens.
But I have a cassette file with this content:
http_interactions:
- request:
    method: post
    uri: https://accounts.google.com/o/oauth2/token
    body:
If I allow VCR to record new requests, the content of this cassette changes. I don't understand why, since the method and URI do not change (a POST to https://accounts.google.com/o/oauth2/token).
I changed the tag to record new episodes and now the test is passing... I am clueless.
I ran the test again and now I get this when the POST to the token URL is made:
Completed 500 Internal Server Error in 449ms
Psych::BadAlias (Unknown alias: 70317249293120):
Maybe you have some parameters inside the POST which are different for every request? If so, you can tell VCR to ignore those parameters by adding match_requests_on: [:method, VCR.request_matchers.uri_without_params("your_param")] to your VCR configuration.
Analyse your request in depth and find out which parameters are changing. You can also tell VCR to match on other criteria; have a look here: https://www.relishapp.com/vcr/vcr/v/2-4-0/docs/request-matching
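For example, a sketch of a cassette that ignores a changing query parameter (the parameter name 'timestamp' is only an illustration):

VCR.use_cassette('google_auth',
  match_requests_on: [:method, VCR.request_matchers.uri_without_params('timestamp')]) do
  # code that POSTs to https://accounts.google.com/o/oauth2/token
end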
Ok, here's a solution...
The problem comes, as I said in the comment, from refreshing the token. When using OAuth you have a token, which may or may not have expired. If you run the test while the token is fresh, the refresh request isn't made. But if the token has expired it has to be refreshed, and thus VCR throws an error.
To solve that, what I did is add the refresh-token URL to VCR's ignored requests:
VCR.configure do |c|
  c.cassette_library_dir = 'fixtures/vcr_cassettes'
  c.hook_into :webmock # or :fakeweb
  c.ignore_request { |request| request.uri == 'https://accounts.google.com/o/oauth2/token' }
end
It's not the best solution, since sometimes the token gets refreshed in the tests... but it's the best solution I could find...
I was getting the same issue with the same URL. For me, the problem was that my code was attempting to make the same call to https://accounts.google.com/o/oauth2/token more than once.
The VCR error message itself suggests the solution:
The cassette contains an HTTP interaction that matches this request, but it has already been played back. If you wish to allow a single HTTP interaction to be played back multiple times, set the :allow_playback_repeats cassette option
In my case, adding this option fixed the problem: it tells VCR to revert to its 1.x behaviour of not re-recording duplicate requests, and simply to play back the result of a previously recorded duplicate request.
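If you set up cassettes manually rather than via tags, the same option can be passed to use_cassette (the cassette name is illustrative):

VCR.use_cassette('google_oauth', allow_playback_repeats: true) do
  # the code that makes the repeated POST to the token endpoint
end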
I am using Cucumber, so my solution was to add the following to my features/support/vcr.rb:
VCR.cucumber_tags do |t|
  t.tag '@vcr', use_scenario_name: true
  t.tag '@new_episodes', record: :new_episodes
  t.tag '@allow_playback_repeats', use_scenario_name: true, allow_playback_repeats: true, record: :new_episodes
end
Notice the @allow_playback_repeats tag. I simply tagged my scenario with this tag, and everything worked properly thereafter:

@allow_playback_repeats
Scenario: Uploading a video initiates an upload to YouTube

Note that it doesn't work if you specify both @vcr and @allow_playback_repeats.
If you're using RSpec, you'll need to adapt the solution accordingly, but it should be as simple as:
it "does something", :vcr => { allow_playback_repeats: true } do
...
end
I met the same problem, and finally found that there was a parameter that changed every time.
So my solution is: put the recorded parameters and the real parameters side by side and compare them, and also make sure your next unit test run generates a new parameter.
In my Rails project, I'm using VCR and RSpec to test HTTP interactions against an external REST web service that only allows calls to it once per second.
What this means so far is that I end up running my test suite until it fails due to a "number of calls exceeded" error from the web service. At that stage at least some cassettes get recorded, so I just keep re-running the suite until eventually all the cassettes are recorded and the suite can run from cassettes alone (my default_cassette_options = { record: :new_episodes }). This doesn't seem like an optimal way to do things, especially if I need to re-record my cassettes in the future, and I worry that the constant calls could land me on a blacklist with the web service (there's no test server that I know of).
So I ended up putting calls to sleep(1) in my RSpec it blocks directly before the call to the web service, and then refactored those calls up into the VCR configuration:
spec/support/vcr.rb
VCR.configure do |c|
  # ...
  c.after_http_request do |request, response|
    sleep(1)
  end
end
Although this seems to work fine, is there a better way to do this? At the moment, if a call to an external service that doesn't already have a cassette is the final test in the suite, the suite sleeps unnecessarily for one second. Likewise, if the time between two cassette-less web service calls in the test suite is more than one second, there's another unnecessary pause. Has anyone built logic to detect these kinds of conditions, or is there a way to do this elegantly in the VCR configuration?
First off, I would recommend against using :new_episodes as your record mode. It has its uses, but the default (:once) is generally what you want. For accuracy, you want to record a cassette as a sequence of HTTP requests that were made in a single pass. With :new_episodes, you can wind up with cassettes that contain HTTP interactions that were recorded months apart but are now being played back together, and the real HTTP server may not respond in the same fashion.
Secondly, I'd encourage you to listen to the pain exposed by your tests and find ways to decouple most of your test suite from these HTTP requests. Can you find a way to make it so that only the tests focused on the client, plus the end-to-end acceptance tests, make the requests? If you wrap the HTTP stuff in a simple interface, it should be easy to substitute a test double for all the other tests, and to control your inputs more easily.
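A sketch of that decoupling (the class, method, and URL are hypothetical):

require 'json'
require 'net/http'

# The only object in the app that knows about the HTTP details
class ThrottledApiClient
  def fetch_albums
    JSON.parse(Net::HTTP.get(URI('https://my-throttled-api.com/albums')))
  end
end

Most specs can then receive an instance_double(ThrottledApiClient) instead of going through VCR, leaving only the client specs and a few end-to-end tests to exercise the cassettes.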
That's a longer term fix, though. In the short term, you can tweak your VCR config like so:
VCR.configure do |vcr|
  allow_next_request_at = nil
  filters = [:real?, lambda { |r| URI(r.uri).host == 'my-throttled-api.com' }]

  vcr.after_http_request(*filters) do |request, response|
    allow_next_request_at = Time.now + 1
  end

  vcr.before_http_request(*filters) do |request|
    if allow_next_request_at && Time.now < allow_next_request_at
      sleep(allow_next_request_at - Time.now)
    end
  end
end
This uses hook filters (as documented) to run the hooks only on real requests to the API host. allow_next_request_at is used to sleep the minimum amount of time necessary.
An alternative may be to use APICache as a proxy around your HTTP library, as it will handle rate limiting on your behalf.
APICache.get("my_albums", :period => 1) do
  FlickrRb.get_all_sets
end
This will raise APICache::CannotFetch when you attempt to call the API more often than your limit.
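So a call site might guard against that (a sketch based on the behaviour described above):

begin
  albums = APICache.get("my_albums", :period => 1) { FlickrRb.get_all_sets }
rescue APICache::CannotFetch
  # Rate limit hit: back off and retry later, or fall back to cached data
end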
Here's a link to the APICache GitHub repo.