vcr does not know how to handle this request - ruby-on-rails

Hi, I am trying to test Google auth with Cucumber, using VCR with a tag.
Everything goes fine until the token expires; I think that is when this error happens.
I have a cassette file with this content:
http_interactions:
- request:
    method: post
    uri: https://accounts.google.com/o/oauth2/token
    body:
If I allow VCR to record new requests, the content of this cassette changes. I don't understand why, since the method and URI do not change: it is still a POST to https://accounts.google.com/o/oauth2/token.
I changed the tag to record new episodes and now the test is passing... I am clueless.
I ran the test again, and now I get this when the POST to the token URL is made:
Completed 500 Internal Server Error in 449ms
Psych::BadAlias (Unknown alias: 70317249293120):

Maybe you have some parameters inside the POST which are different for every request? If so, you can tell VCR to ignore these parameters by adding match_requests_on: [:method, VCR.request_matchers.uri_without_params("your_param")] to your VCR configuration.
Analyse your request in depth and find out which parameters are changing. You can also tell VCR to match on other criteria; have a look here: https://www.relishapp.com/vcr/vcr/v/2-4-0/docs/request-matching
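The uri_without_params matcher compares URIs after dropping the named query parameters. Roughly, it behaves like this sketch (a plain-Ruby approximation using only the standard library, not VCR's actual implementation):

```ruby
require "uri"

# Strip the named params from a URI's query string before comparing,
# roughly what VCR's uri_without_params("timestamp") matches on.
def uri_without_params(uri_string, *params)
  uri = URI(uri_string)
  return uri.to_s if uri.query.nil?
  kept = URI.decode_www_form(uri.query).reject { |k, _| params.include?(k) }
  uri.query = kept.empty? ? nil : URI.encode_www_form(kept)
  uri.to_s
end

a = uri_without_params("https://example.com/token?code=abc&timestamp=1", "timestamp")
b = uri_without_params("https://example.com/token?code=abc&timestamp=2", "timestamp")
puts a == b # => true: the two requests match once the volatile param is ignored
```

With this matcher in place, two requests that differ only in the ignored parameter replay the same cassette interaction.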

Ok, here's a solution...
The problem comes, as I said in the comment, from refreshing the token. When using OAuth you have a token, which may or may not have expired. If you run the test and the token is fresh, the refresh request isn't made. But if the token has expired, it has to be refreshed, and that unrecorded request is what makes VCR throw an error.
To solve that, what I did is add the refresh token url to the ignored requests of vcr:
VCR.configure do |c|
  c.cassette_library_dir = 'fixtures/vcr_cassettes'
  c.hook_into :webmock # or :fakeweb
  c.ignore_request { |request| request.uri == 'https://accounts.google.com/o/oauth2/token' }
end
It's not the best solution, since sometimes the token gets refreshed in the tests... but it's the best solution I could find...

I was getting the same issue with the same URL. For me, the problem was that my code was attempting to make the same call to https://accounts.google.com/o/oauth2/token more than once.
The VCR error message itself suggests the solution:
The cassette contains an HTTP interaction that matches this request, but it has already been played back. If you wish to allow a single HTTP interaction to be played back multiple times, set the :allow_playback_repeats cassette option
In my case, adding this option fixed the problem, as it tells VCR to revert to its 1.x behaviour of not re-recording duplicate requests, but simply playing back the result of a previously recorded duplicate request.
I am using Cucumber, so my solution was to add the following to my features/support/vcr.rb:
VCR.cucumber_tags do |t|
  t.tag '@vcr', use_scenario_name: true
  t.tag '@new_episodes', record: :new_episodes
  t.tag '@allow_playback_repeats', use_scenario_name: true, allow_playback_repeats: true, record: :new_episodes
end
Notice the @allow_playback_repeats tag. I simply tagged my scenario with it, and everything worked properly thereafter:
@allow_playback_repeats
Scenario: Uploading a video initiates an upload to YouTube
Note that it doesn't work if you specify both @vcr and @allow_playback_repeats.
If you're using RSpec, you'll need to adapt the solution accordingly, but, it should be as simple as:
it "does something", :vcr => { allow_playback_repeats: true } do
...
end

I met the same problem, and finally found that there is a parameter that changes on every request.
My solution: print the mocked parameters and the real parameters side by side and compare them, and also make sure your next unit test generates a new parameter.
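One way to spot the changing parameter is to decode both bodies and diff them; a minimal sketch, assuming form-encoded bodies (the parameter names here are made up):

```ruby
require "uri"

# Hypothetical recorded (cassette) vs. live request bodies.
recorded = URI.decode_www_form("grant_type=refresh_token&ts=1001").to_h
live     = URI.decode_www_form("grant_type=refresh_token&ts=1002").to_h

# Keys whose values differ between the cassette and the new request:
# these are the parameters to exclude from VCR's request matching.
changing = recorded.keys.select { |k| recorded[k] != live[k] }
puts changing.inspect # => ["ts"]
```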


Verify Shopify webhook

I believe that to have a Shopify webhook integrate with a Rails app, the Rails app needs to disable the default verify_authenticity_token method, and implement its own authentication using the X_SHOPIFY_HMAC_SHA256 header. The Shopify docs say to just use request.body.read. So, I did that:
def create
  verify_webhook(request)
  # Send back a 200 OK response
  head :ok
end

def verify_webhook(request)
  header_hmac = request.headers["HTTP_X_SHOPIFY_HMAC_SHA256"]
  digest = OpenSSL::Digest.new("sha256")
  request.body.rewind
  calculated_hmac = Base64.encode64(OpenSSL::HMAC.digest(digest, SHARED_SECRET, request.body.read)).strip
  puts "header hmac: #{header_hmac}"
  puts "calculated hmac: #{calculated_hmac}"
  puts "Verified: #{ActiveSupport::SecurityUtils.secure_compare(calculated_hmac, header_hmac)}"
end
The Shopify webhook is directed to the correct URL and the route gives it to the controller method shown above. But when I send a test notification, the output is not right. The two HMACs are not equal, and so it is not verified. I am fairly sure that the problem is that Shopify is using the entire request as their seed for the authentication hash, not just the POST contents. So, I need the original, untouched HTTP request, unless I am mistaken.
This question seemed like the only promising thing on the Internet after at least an hour of searching. It was exactly what I was asking and it had an accepted answer with 30 upvotes. But his answer... is absurd. It spits out an unintelligible, garbled mess of all kinds of things. Am I missing something glaring?
Furthermore, this article seemed to suggest that what I am looking for is not possible. It seems that Rails is never given the unadulterated request, but it is split into disparate parts by Rack, before it ever gets to Rails. If so, I guess I could maybe attempt to reassemble it, but I would have to even get the order of the headers correct for a hash to work, so I can't imagine that would be possible.
I guess my main question is, am I totally screwed?
The problem was in my SHARED_SECRET. I assumed this was the API secret key, because a few days ago it was called the shared secret in the Shopify admin page. But now I see a tiny paragraph at the bottom of the notifications page that says,
All your webhooks will be signed with ---MY_REAL_SHARED_SECRET--- so
you can verify their integrity.
This is the secret I need to use to verify the webhooks. Why there are two of them, I have no idea.
Have you tried doing it in the order they show in their guides? They have a working sample for ruby.
def create
  request.body.rewind
  data = request.body.read
  header = request.headers["HTTP_X_SHOPIFY_HMAC_SHA256"]
  verified = verify_webhook(data, header)
  head :ok
end
They say in their guides:
Each Webhook request includes a X-Shopify-Hmac-SHA256 header which is
generated using the app's shared secret, along with the data sent in
the request.
The key words being "generated using the shared secret AND the DATA sent in the request", so all of this should be available on your end: both the data and the shared secret.
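Putting the pieces together, here is a self-contained sketch of the verification (the secret and body are made up, and secure_compare is a stdlib stand-in for ActiveSupport::SecurityUtils.secure_compare):

```ruby
require "openssl"
require "base64"

SHARED_SECRET = "my_webhook_secret" # hypothetical; use the secret shown on the notifications page

# Constant-time comparison to avoid leaking timing information
# (stand-in for ActiveSupport::SecurityUtils.secure_compare).
def secure_compare(a, b)
  return false unless a.bytesize == b.bytesize
  a.bytes.zip(b.bytes).map { |x, y| x ^ y }.sum.zero?
end

# HMAC-SHA256 the raw body with the shared secret, Base64-encode it,
# and compare against the X-Shopify-Hmac-SHA256 header value.
def verify_webhook(data, hmac_header)
  digest = OpenSSL::HMAC.digest(OpenSSL::Digest.new("sha256"), SHARED_SECRET, data)
  calculated_hmac = Base64.strict_encode64(digest)
  secure_compare(calculated_hmac, hmac_header)
end

body = '{"id":123}'
good = Base64.strict_encode64(OpenSSL::HMAC.digest(OpenSSL::Digest.new("sha256"), SHARED_SECRET, body))
puts verify_webhook(body, good)       # => true
puts verify_webhook(body + "x", good) # => false: tampered body fails verification
```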

Weird behavior on form data signature with a GET request

Case study
I have 2 apps (Rails, the sender, and Laravel, the receiver) in which I sign data using a private and public key to ensure the integrity of information between requests. The data is sent from one app to the other as GET parameters:
domain.com/callback?order_id=12&time=2015-10-01T22:38:20Z&signature=VX2WxlTaGK5N12GhZ5oqXU5h3wW/I70MYZhLbAYNQ79pFquuhdOerwBwqaq2BRuGyhKoY6VEHJkNnFjLAJkQD6Q5z4Vmk...
Problem
I'm experiencing odd behavior between the staging server and my local one regarding the signature of this data.
When testing on staging, the generated link (GET) looks correct (checked in the page source in Chrome).
On the local server, it's the exact same formatted HTML (except the data, which changes of course). By the way, I'm using Haml.
The callback URL is generated from a decorator :
def url_to_store
  params = url_params.to_a.map { |a| a.join('=') }.join('&')
  signature = Shield::Crypto.new(params).signature
  "#{object.referer}?#{params}&signature=#{signature}"
end

def url_params
  {
    order_id: object.id,
    transaction_id: object.transaction_id,
    user_id: object.user_id,
    status: object.status,
    time: Time.now.utc.iso8601,
    reference: object.success? ? object.reference : ''
  }
end
When clicking the link from staging, I get redirected to the other app, which validates the signature. Everything works.
However, the same does not apply to the local server (my machine). When clicking the link, the signature contains spaces (%20):
signature=wP5EmeIGzXynwJc+BDV+jGVzyYhZOJuu7PzCXgnP2qbBfdqrAceEjxgh1EH2%20%20%20%20%20%20%20%20%20%20%20%20tvR66o3IA
which of course makes the other app reject the request, as the signature is invalid. That's my issue. The exact same app. The exact same code base and version (a.k.a. commit SHA). Different behavior.
I don't know how to reproduce it. I was hoping some of you have already experienced a similar case and could give me a hint.
Ideas?
NOTE: I'm using the exact same callback URL (a local PHP app) to test both the staging and the local server, so I don't think the problem comes from the PHP app. Something related to a Rails debug setting, maybe?
The problem came from Haml and its "ugly" mode. In development it is set to false by default, which causes the HTML output to be padded with indentation, and that was somehow messing with the signature.
The related GitHub issues can be found here: https://github.com/haml/haml/issues/636 and https://github.com/haml/haml/issues/828
So to fix it, I created an initializer to enable it by default:
config/initializers/haml.rb
require 'haml/template'
Haml::Template.options[:ugly] = true
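The failure mode can be reproduced without Haml: once indentation whitespace leaks into the attribute value, URL-encoding turns it into runs of %20 and the signature no longer compares equal. A small illustration (the signature value is made up):

```ruby
require "erb"

signature = "wP5Eme+BDV+jGVz"          # hypothetical Base64 signature
padded    = signature + " " * 4        # same value after HTML indentation leaks in

# ERB::Util.url_encode percent-encodes spaces as %20 (and '+' as %2B),
# which is what showed up in the broken callback URL.
encoded = ERB::Util.url_encode(padded)
puts encoded             # trailing spaces become %20%20%20%20
puts signature == padded # => false: the receiving app rejects the request
```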

How can I test Stripe.js using poltergeist and Capybara?

I've been going nuts trying to write an automated test for my user sign up page. Users will be charged a recurring subscription via Stripe. They input their basic details (email, password, etc) and their credit card details on the same form, then the following flow happens:
1. (Client-side) stripe.js makes an AJAX request to Stripe's servers, which (assuming everything is valid) returns a credit card token.
2. My javascript fills in a hidden input in the HTML form with the credit card token, and submits the form to my Rails server.
3. (Now server-side) I validate the user's basic details. If they're invalid, return (because there's no point charging them via Stripe if, e.g., their email address is invalid and they can't create an account anyway).
4. If they're valid, attempt to create a Stripe::Customer object, add the right subscription, charge them using Stripe's ruby gem, etc.
All of this works perfectly fine... except I can't figure out how to test it. Testing step #4 is easy enough as it takes place on the server-side so I can mock out the Stripe calls with a gem like VCR.
Step #1 is what's giving me trouble. I've tried to test this using both puffing-billy and the stripe-ruby-mock gem, but nothing works. Here's my own javascript (simplified):
var stripeResponseHandler = function (status, response) {
  console.log("response handler called");
  if (response.error) {
    // show the errors on the form
  } else {
    // insert the token into the form so it gets submitted to the server
    $("#credit_card_token").val(response.id);
    // Now submit the form.
    $form.get(0).submit();
  }
};

$form.submit(function (event) {
  // Disable the submit button to prevent repeated clicks
  $submitBtn.prop("disabled", true);
  event.preventDefault();
  console.log("creating token...");
  Stripe.createToken({
    // Get the credit card details from the form
    // and input them here.
  }, stripeResponseHandler);
  // Prevent the form from submitting the normal way.
  return false;
});
Just to reiterate, this all works fine when I test it manually. But my automated tests fail:
Failure/Error: expect{submit_form}.to change{User.count}.by(1)
expected result to have changed by 1, but was changed by 0
When I try to use the gem puffing-billy, it seems to be caching stripe.js itself (which is loaded from Stripe's own servers at js.stripe.com, not served from my own app, as Stripe doesn't support that), but the call initiated by Stripe.createToken isn't being cached. In fact, when I check my Stripe server logs, it doesn't seem that the call is even being made (or at least Stripe isn't receiving it).
Note those console.log statements in my JS above. When I run my test suite, the line "creating token..." gets printed, but "response handler called." doesn't. Looks like the response handler is never being called.
I've left out some details because this question is already very long, but can add more on request. What am I doing wrong here? How can I test my sign up page?
UPDATE: See my comment on this Github issue on stripe-ruby-mock for more info on what I've tried without success.
If I understand correctly...
Capybara won't know about your AJAX requests. You should be able to stub out the AJAX requests with Sinatra: have it return fixtures, much the same as VCR does.
Here's an article on it:
https://robots.thoughtbot.com/using-capybara-to-test-javascript-that-makes-http
You need to boot the Sinatra app in Capybara and then match the URLs in your AJAX calls.
Something like:
class FakeContinousIntegration < Sinatra::Base
  def self.boot
    instance = new
    Capybara::Server.new(instance).tap { |server| server.boot }
  end

  get '/some/ajax' do
    # send ajax back to capybara
  end
end
When you boot the server, it will return the address and port, which you can write to a config that your JS can use.
@server = App.boot
Then I use the address and port to config the JS app
def write_js_config
  config['api'] = "http://#{@server.host}:#{@server.port}"
  config.to_json
end
In spec_helper.rb, send the config to the JS so your script points to your Sinatra app. Mine compiles with gulp, so I just build the config into it before the tests run:
system('gulp build --env capybara')
I've had tests that worked manually fail in Capybara/Poltergeist due to a timeout. In my case, the solution was to wait for all AJAX requests to finish. Reference:
Not sure whether Stripe.js uses jQuery internally; try checking for a condition set by stripeResponseHandler.
In addition to the wait_for_ajax trick mentioned, it looks like you are calling expect before your database was updated. One way to check would be to add a breakpoint in your code (binding.pry) and see whether it is a race-condition issue.
Also, as per Capybara's documentation, introducing an expectation of a UI change makes it 'smartly' wait for AJAX calls to finish:
expect(page).not_to have_content('Enter credit card details')
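For reference, the wait_for_ajax trick boils down to polling a condition until a timeout; a framework-agnostic sketch (with Capybara you would yield something like page.evaluate_script('jQuery.active').zero? instead of the dummy flag used here):

```ruby
# Poll the given block until it returns truthy or the timeout elapses.
def wait_for(timeout: 2, interval: 0.05)
  deadline = Time.now + timeout
  loop do
    return true if yield
    raise "condition not met within #{timeout}s" if Time.now > deadline
    sleep interval
  end
end

done = false
Thread.new { sleep 0.1; done = true } # stand-in for a pending AJAX call
puts wait_for { done } # => true once the "request" finishes
```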

writing spec for method that hits a web service

I'm writing a spec to verify that my Video model will create a proper thumbnail for a vimeo video when it is created. It looks something like this:
it "creates thumbnail url" do
  vimeo_url = "http://player.vimeo.com/video/12345"
  vid = Factory.build(:video, video_url: vimeo_url)
  # thumbnail created when saved
  vid.save!
  expect do
    URI.parse(vid.thumbnail_url)
  end.to_not raise_error
end
The problem is that my test is super slow because it has to hit vimeo.com. So I'm trying to stub the method that calls the server. Two questions:
1) Is this the correct way/time to stub something?
2) If yes, how do I stub it? In my Video model I have a method called get_vimeo_thumbnail() that hits vimeo.com. I want to stub that method. But if in my spec I do vid.stub(:get_vimeo_thumbnail).and_return("http://someurl.com"), it doesn't work. When I run the test it still hits vimeo.com.
The VCR gem is probably worth considering. It hits the real web service the first time you run it and records the response, so that it can be replayed the next time you run the test (making subsequent tests fast).
I can't see anything wrong with the stub call you are making if you are calling stub before save!.
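The ordering matters because the thumbnail is fetched inside save!. A minimal stand-in (no RSpec, and all names here are illustrative) showing that a stub applied before save! takes effect:

```ruby
# A minimal stand-in for the Video model (hypothetical implementation).
class Video
  attr_reader :thumbnail_url

  def get_vimeo_thumbnail
    "http://real-thumbnail-from-vimeo" # imagine a slow HTTP call to vimeo.com here
  end

  def save!
    @thumbnail_url = get_vimeo_thumbnail # thumbnail created on save
  end
end

vid = Video.new
# Stub BEFORE save!, so the save logic sees the stubbed method.
vid.define_singleton_method(:get_vimeo_thumbnail) { "http://someurl.com/thumb.jpg" }
vid.save!
puts vid.thumbnail_url # => "http://someurl.com/thumb.jpg"
```

If the stub were applied after save!, the real (slow) method would already have run, which is the symptom described in the question.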
I also second the use of the 'vcr' gem.
There's also a (pro)-episode of Railscast available about VCR:
http://railscasts.com/episodes/291-testing-with-vcr
VCR can be used to record all outgoing webservice calls into "cassettes" (fixtures) that will be replayed when the tests are run again. So you get the initial set of "real-world" responses but will not hit the remote api anymore.
It also has options to do "on demand" requests when there is no recorded response available locally, and also to make explicit "live" requests.
You can, and should, run tests against the live endpoint from time to time to verify.

My web site needs to read a slow web site; how can I improve the performance?

I'm writing a web site with Rails that lets visitors input some domains and checks whether they have been registered.
When the user clicks the "Submit" button, my site posts some data to another web site and reads the result back. But that web site is slow for me; each request takes 2 or 3 seconds. So I'm worried about the performance.
For example, if my web server allows 100 processes at most, then only 30 or 40 users can visit my web site at the same time. That's not acceptable; is there any way to improve the performance?
PS:
At first I wanted to use AJAX to read that web site, but because of the "cross-domain" problem it doesn't work. So I have to use this "AJAX proxy" solution.
It's a bit more work, but you can use something like DelayedJob to process the requests to the other site in the background.
DelayedJob creates separate worker processes that look at a jobs table for stuff to do. When the user clicks submit, such a job is created, and starts running in one of those workers. This off-loads your Rails workers, and keeps your website snappy.
However, you will have to create some sort of polling mechanism in the browser while the job is running. Perhaps using a refresh or some simple AJAX. That way, the visitor could see a message such as “One moment, please...”, and after a while, the actual results.
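The shape of that solution can be sketched with a plain Ruby thread standing in for a DelayedJob worker (all names here are made up; real code would persist jobs in DelayedJob's jobs table rather than an in-memory hash):

```ruby
require "securerandom"

JOBS = {} # job_id => result; DelayedJob would use its jobs table instead

# Enqueue: what the controller does when the user clicks Submit.
def check_domain_later(domain)
  job_id = SecureRandom.hex(4)
  JOBS[job_id] = nil
  Thread.new do
    sleep 0.1 # stand-in for the 2-3 second remote request
    JOBS[job_id] = { domain: domain, registered: false }
  end
  job_id # returned to the browser, which polls with it
end

# Poll: what the "One moment, please..." AJAX endpoint checks.
def job_result(job_id)
  JOBS[job_id] # nil while the job is still running
end

id = check_domain_later("example.com")
sleep 0.3 # in the browser this would be a polling loop, not a sleep
puts job_result(id)[:registered] # => false
```

This keeps the Rails worker free: the request that enqueues the job returns immediately, and only the cheap polling endpoint is hit while the slow lookup runs.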
Rather than posting some data to the websites, you could use an HTTP HEAD request, which (I believe) should return only the header information for that URL.
I found this code by googling around a bit:
require "net/http"
req = Net::HTTP.new('google.com', 80)
p req.request_head('/')
This will probably be faster than a POST request, and you won't have to wait to receive the entire contents of that resource. You should be able to determine whether the site is in use based on the response code.
Try using Typhoeus rather than AJAX to get the body. You can POST the domain names to that site using Typhoeus and parse the response. It's extremely fast compared to other solutions. A snippet ripped from the wiki on the GitHub repo (http://github.com/pauldix/typhoeus) shows that you can run requests in parallel (which is probably what you want, considering that each request takes 1 to 2 seconds!):
hydra = Typhoeus::Hydra.new

first_request = Typhoeus::Request.new("http://localhost:3000/posts/1.json")
first_request.on_complete do |response|
  post = JSON.parse(response.body)
  third_request = Typhoeus::Request.new(post.links.first) # get the first url in the post
  third_request.on_complete do |response|
    # do something with that
  end
  hydra.queue third_request
  post # the block's value becomes first_request.handled_response
end

second_request = Typhoeus::Request.new("http://localhost:3000/users/1.json")
second_request.on_complete do |response|
  JSON.parse(response.body)
end

hydra.queue first_request
hydra.queue second_request
hydra.run # this is a blocking call that returns once all requests are complete

first_request.handled_response  # the value returned from the on_complete block
second_request.handled_response # the value returned from the on_complete block (parsed JSON)
Also Typhoeus + delayed_job = AWESOME!
