I am trying to set up unit tests for all of my HTTP requests. Every request requires authentication, and in my app that means authentication via a cookie and a DB query.
I have a preDispatch method in a parent controller that looks like this:
$this->cookie = Cookie::readCookie();
if (is_null($this->cookie))
{
    return $this->failResponseView();
}

$this->dm = $this->getServiceLocator()->get('doctrine.documentmanager.odm_default');

// Does not have authorization
if (!$this->hasAppAccess())
{
    return $this->failResponseView();
}
This has been working fine as far as the app is concerned. But running phpunit fails every time because the cookie can't be read, or the response is written before the cookie is read.
This is me mirroring what I do in the regular app, in my test setup method:
$this->_cookie = new Cookie(array('access_token' => $profile['token']));
$this->_cookie->setCookie();
However, I receive the Exception below when the code reaches this point. My question is: how can I fake or bypass my cookie authentication when running phpunit, to make sure all of these authenticated requests work?
Cannot modify header information - headers already sent by (output started at D:\www\app\vendor\phpunit\phpunit\PHPUnit\Util\Printer.php:172)
UPDATE
It looks like, since the PHPUnit\Util\Printer is outputting to STDOUT (see above), it doesn't like that I am trying to write a cookie. Running this allowed full execution:
phpunit --stderr
So I am able to call the setCookie() method, and it executes fine. But when I get to the point where it does Cookie::readCookie(), even though it's already been set, it can't read it. It returns null.
So question is still pretty much the same. What do I do to test this app if it uses cookie authentication?
Ugh, it's always something simple. In my setup method, I can just do this...
$_COOKIE[$name] = $this->_cookie->getData();
Related
I am using Watir with a headless browser. To get the information I need from another website, I have to perform three steps: add a location, add a vehicle, and fetch the product list.
Currently I submit these details from my server and perform all three steps in a single HTTP request, driven by Watir and Headless.
I want to break that one HTTP request down into three HTTP requests on my server:
1) add_location: fire an HTTP request which opens the headless browser and selects the location.
2) add_vehicle: fire an HTTP request which reuses the headless browser in which the location was added, and selects the vehicle.
3) fetch_product: fire an HTTP request which reuses the headless browser in which the location and vehicle were added, and fetches the product list.
I can't find a way to reuse the Watir/Headless session that is already open in the next HTTP request on the Rails side.
Code Sample:
class TestsController < ApplicationController
  def add_location
    @headless = Headless.new
    @headless.start
    @watir = Watir::Browser.new
    @watir.goto('www.google.com')
    @watir.text_field(id: 'findstore-input')
          .wait_until(&:present?).set(params[:zip_code])
    @watir.a(id: 'findstore-button').click
    @watir.div(class: 'notifier').wait_while(&:present?)
  end

  def add_vehicle
    # need to reuse the @watir object from above in this action
  end
end
The design change from one request to three has a big impact on your API, as even this simple part is now stateful, i.e. you need to keep state between each of the three requests.
Once you understand that, you have different possibilities.
Build up your information request after request, and only when it is complete, use Watir to get the information you need (a sketch follows).
This is basically just changing the API: you store the data in a session, cookie, database, or whatever.
It doesn't require big changes on your side, but it doesn't bring any advantage either.
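A rough sketch of what that could look like; every name here is illustrative rather than taken from the question:

class ProductsController < ApplicationController
  def add_location
    # store the first piece of state; nothing touches Watir yet
    session[:zip_code] = params[:zip_code]
    head :ok
  end

  def add_vehicle
    session[:vehicle] = params[:vehicle]
    head :ok
  end

  def fetch_product
    # only now drive the headless browser, in a single place
    render json: fetch_with_watir(session[:zip_code], session[:vehicle])
  end

  private

  def fetch_with_watir(zip_code, vehicle)
    headless = Headless.new
    headless.start
    browser = Watir::Browser.new
    # ... same Watir steps as in the question, executed within one request ...
  ensure
    browser.close if browser
    headless.destroy if headless
  end
end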
Forget this one as soon as you read it: you could pass around a global reference to your browser object keyed by the session, but it has a HUGE memory impact and you can run into race conditions.
NEVER do this, please.
In case you really want to split the Watir work into three different steps (e.g. because it is too slow), you can use a background job to which you transmit the user's data as it arrives (using a dedicated database table, websocket, or whatever), then wait for the job to finish (i.e. to get a result), e.g. by polling until it's available (sketched below).
This solution requires a lot more work, but it keeps the HTTP requests with your client lightweight and allows you to do any kind of complex task in the background, which would otherwise probably time out.
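A very rough sketch of that approach, assuming ActiveJob and a record the client can poll; again, every name below is illustrative rather than taken from the question:

class FetchProductsJob < ApplicationJob
  queue_as :default

  def perform(product_fetch_id)
    fetch = ProductFetch.find(product_fetch_id) # hypothetical model holding the collected data
    headless = Headless.new
    headless.start
    browser = Watir::Browser.new
    # ... drive the remote site with fetch.zip_code and fetch.vehicle ...
    fetch.update!(status: :done, results: []) # persist whatever was scraped
  ensure
    browser.close if browser
    headless.destroy if headless
  end
end

# In the controller: enqueue the job once all the data has been collected,
# then let the client poll (or be notified over a websocket) for the result.
# FetchProductsJob.perform_later(fetch.id)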
You can make use of the hooks file to start the browser in headless mode and assign it to a variable, which the separate methods can then reuse to send URLs to the browser.
For example:
In the hooks file, you can add it as below:
@browser = Watir::Browser.new :chrome, options: { args: ['--headless'] }
So you can call @browser.goto('www.google.com') in one method and reuse the same browser instance in other calls as well:
def example1
  @browser.goto('www.google.com')
end

def example2
  @browser.goto('www.facebook.com')
end

# ... and so on
Hope this helps.
Case study
I have 2 apps (Rails => the sender and Laravel => the receiver) in which I sign data using a private and public key in order to ensure the integrity of the information passed between requests. The data is sent from one app to the other using GET parameters:
domain.com/callback?order_id=12&time=2015-10-01T22:38:20Z&signature=VX2WxlTaGK5N12GhZ5oqXU5h3wW/I70MYZhLbAYNQ79pFquuhdOerwBwqaq2BRuGyhKoY6VEHJkNnFjLAJkQD6Q5z4Vmk...
Problem
I'm experiencing odd behavior between the staging server and my local one regarding the signature of that data.
When testing on staging, the generated link (GET) looks fine in the page source (viewed in Chrome). On the local server it's the exact same formatted HTML (except for the data, which changes of course). By the way, I'm using HAML.
The callback URL is generated from a decorator:
def url_to_store
  params = url_params.to_a.map { |a| a.join('=') }.join('&')
  signature = Shield::Crypto.new(params).signature
  "#{object.referer}?#{params}&signature=#{signature}"
end

def url_params
  {
    order_id: object.id,
    transaction_id: object.transaction_id,
    user_id: object.user_id,
    status: object.status,
    time: Time.now.utc.iso8601,
    reference: object.success? ? object.reference : ''
  }
end
When clicking on the link from staging, I get redirected to the other app, which validates the signature. Everything works.
However, the same does not apply to the local server (my machine). When clicking on the link, the signature contains spaces (%20):
signature=wP5EmeIGzXynwJc+BDV+jGVzyYhZOJuu7PzCXgnP2qbBfdqrAceEjxgh1EH2%20%20%20%20%20%20%20%20%20%20%20%20tvR66o3IA
Which of course makes the other app reject the request, as the signature is invalid. That's my issue. The exact same app. The exact same code base and version (i.e. the same commit SHA). Different behavior.
I don't know how to reproduce it. I was hoping some of you guys already experienced similar cases and could give me a hint.
Ideas?
NOTE: I'm using the exact same callback URL (local PHP app) to test both the staging and the local server. I don't think the problem comes from the PHP app. Maybe something related to a Rails debug setting?
The problem came from Haml and its ugly mode. In development it is set to false by default, which causes the HTML output to be padded with indentation whitespace, and that was somehow messing with the signature.
The related GitHub issues can be found at https://github.com/haml/haml/issues/636 and https://github.com/haml/haml/issues/828
So to fix it, I created an initializer to enable it by default:
config/initializers/haml.rb
require 'haml/template'
Haml::Template.options[:ugly] = true
I've been going nuts trying to write an automated test for my user sign up page. Users will be charged a recurring subscription via Stripe. They input their basic details (email, password, etc) and their credit card details on the same form, then the following flow happens:
1) (On the client side) stripe.js makes an AJAX request to Stripe's servers, which (assuming everything is valid) returns a credit card token.
2) My JavaScript fills in a hidden input in the HTML form with the credit card token and submits the form to my Rails server.
3) (Now on the server side) I validate the user's basic details. If they're invalid, return (because there's no point charging them via Stripe if, e.g., their email address is invalid and they can't create an account anyway).
4) If they're valid, attempt to create a Stripe::Customer object, add the right subscription and charge them using Stripe's Ruby gem (a rough sketch follows).
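(For illustration only, step 4 looks roughly like the sketch below when using Stripe's Ruby gem; the plan id and variable names are placeholders rather than my actual controller code, and the exact API shape varies between gem versions.)

customer = Stripe::Customer.create(
  email: user.email,
  source: params[:credit_card_token] # token set by stripe.js in the hidden field
)
customer.subscriptions.create(plan: "monthly-plan") # hypothetical plan id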
All of this works perfectly fine... except I can't figure out how to test it. Testing step #4 is easy enough as it takes place on the server side, so I can mock out the Stripe calls with a gem like VCR.
Step #1 is what's giving me trouble. I've tried to test this using both puffing-billy and the stripe-ruby-mock gem, but nothing works. Here's my own JavaScript (simplified):
var stripeResponseHandler = function (status, response) {
  console.log("response handler called");
  if (response.error) {
    // show the errors on the form
  } else {
    // insert the token into the form so it gets submitted to the server
    $("#credit_card_token").val(response.id);
    // Now submit the form.
    $form.get(0).submit();
  }
};

$form.submit(function (event) {
  // Disable the submit button to prevent repeated clicks
  $submitBtn.prop("disabled", true);
  event.preventDefault();
  console.log("creating token...");
  Stripe.createToken({
    // Get the credit card details from the form
    // and input them here.
  }, stripeResponseHandler);
  // Prevent the form from submitting the normal way.
  return false;
});
Just to reiterate, this all works fine when I test it manually. But my automated tests fail:
Failure/Error: expect{submit_form}.to change{User.count}.by(1)
expected result to have changed by 1, but was changed by 0
When I try to use the puffing-billy gem, it seems to cache stripe.js itself (which is loaded from Stripe's own servers at js.stripe.com, not served from my own app, as Stripe doesn't support that), but the call initiated by Stripe.createToken isn't being cached. In fact, when I check my Stripe logs, it doesn't seem that the call is even being made (or at least Stripe isn't receiving it).
Note the console.log statements in my JS above. When I run my test suite, the line "creating token..." gets printed, but "response handler called" doesn't. It looks like the response handler is never called.
I've left out some details because this question is already very long, but can add more on request. What am I doing wrong here? How can I test my sign up page?
UPDATE: See my comment on this GitHub issue on stripe-ruby-mock for more info on what I've tried without success.
If I understand correctly...
Capybara won't know about your AJAX requests. You should be able to stub out the AJAX requests with Sinatra. Have it return fixtures, much the same as VCR would.
Here's an article on it.
https://robots.thoughtbot.com/using-capybara-to-test-javascript-that-makes-http
You need to boot the Sinatra app in Capybara and then match the URLs in your ajax calls.
Something like:
class FakeContinousIntegration < Sinatra::Base
  def self.boot
    instance = new
    Capybara::Server.new(instance).tap { |server| server.boot }
  end

  get '/some/ajax' do
    # send ajax back to capybara
  end
end
When you boot the server, it will return the address and port which you can write to a config that your js can use.
@server = App.boot
Then I use the address and port to configure the JS app:
def write_js_config
  config['api'] = "http://#{@server.host}:#{@server.port}"
  config.to_json
end
In spec_helper.rb, send the config to the JS so your script points to your Sinatra app. Mine compiles with gulp, so I just build the config into it before the tests run:
system('gulp build --env capybara')
I've had tests which worked when run manually fail in Capybara/Poltergeist due to timeouts. In my case, the solution was to wait for all AJAX requests to finish (see the wait_for_ajax sketch below).
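For reference, the commonly used wait_for_ajax helper looks roughly like this; it assumes jQuery is on the page (see the next point for the caveat about Stripe.js):

# spec/support/wait_for_ajax.rb -- a common Capybara helper, assuming jQuery
require 'timeout'

module WaitForAjax
  def wait_for_ajax
    Timeout.timeout(Capybara.default_max_wait_time) do
      loop until finished_all_ajax_requests?
    end
  end

  def finished_all_ajax_requests?
    page.evaluate_script('jQuery.active').zero?
  end
end

RSpec.configure do |config|
  config.include WaitForAjax, type: :feature
end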
I'm not sure whether Stripe.js uses jQuery internally, though, so you may want to check instead for a condition set by stripeResponseHandler.
In addition to the wait_for_ajax trick mentioned, it looks like you are calling expect before your database has been updated. One way to check would be to add a breakpoint in your code (binding.pry) and see whether or not it is a race condition issue.
Also, as per Capybara's documentation, introducing an expectation of a UI change makes it 'smartly' wait for ajax calls to finish:
expect(page).not_to have_content('Enter credit card details')
Hi, I am trying to test Google auth with Cucumber, using VCR with a tag.
Everything goes fine until the token expires. I think this is what happens when it expires.
But I have a cassette file with this content:
http_interactions:
- request:
    method: post
    uri: https://accounts.google.com/o/oauth2/token
    body:
If I allow VCR to record new requests, the content of this cassette changes. I don't understand why, since the method and URI do not change: it is always a POST to https://accounts.google.com/o/oauth2/token.
I changed the tag to record new episodes and now the test is passing... I am clueless.
I ran the test again and now I get this when the POST to the token URL is made:
Completed 500 Internal Server Error in 449ms
Psych::BadAlias (Unknown alias: 70317249293120):
Maybe you have some parameters inside the POST which are different for every request? If so, you can tell VCR to ignore these parameters by adding match_requests_on: [:method, VCR.request_matchers.uri_without_params("your_param")] to your VCR configuration.
Analyse your request in depth and find out which parameters are changing. You can also tell VCR to match on other criteria; have a look at https://www.relishapp.com/vcr/vcr/v/2-4-0/docs/request-matching
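For example, the configuration might look like the sketch below; the parameter name "timestamp" is just a placeholder for whichever parameter is actually changing in your request:

VCR.configure do |c|
  c.default_cassette_options = {
    match_requests_on: [
      :method,
      VCR.request_matchers.uri_without_params("timestamp") # placeholder parameter name
    ]
  }
end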
Ok, here's a solution...
The problem comes, as I said in the comment, from refreshing the token. When using OAuth you have a token, which may or may not have expired. If you run the test and the token is fresh, that request isn't made. But if the token has expired it has to be refreshed, and that is when VCR throws the error.
To solve that, what I did was add the token refresh URL to VCR's ignored requests:
VCR.configure do |c|
  c.cassette_library_dir = 'fixtures/vcr_cassettes'
  c.hook_into :webmock # or :fakeweb
  c.ignore_request { |request| request.uri == 'https://accounts.google.com/o/oauth2/token' }
end
It's not the best solution, since sometimes the token gets refreshed in the tests... but it's the best solution I could find...
I was getting the same issue with the same URL. For me, the problem was that my code was attempting to make the same call to https://accounts.google.com/o/oauth2/token more than once.
One of the potential solutions given in the VCR error message points the way:
The cassette contains an HTTP interaction that matches this request, but it has already been played back. If you wish to allow a single HTTP interaction to be played back multiple times, set the :allow_playback_repeats cassette option
In my case, adding this option fixed the problem, as it tells VCR to revert back to its 1.x functionality of not re-recording duplicate requests, but simply playing back the result of a previously recorded duplicate request.
I am using Cucumber, so my solution was to add the following to my features/support/vcr.rb:
VCR.cucumber_tags do |t|
  t.tag '@vcr', use_scenario_name: true
  t.tag '@new_episodes', record: :new_episodes
  t.tag '@allow_playback_repeats', use_scenario_name: true, allow_playback_repeats: true, record: :new_episodes
end
Notice the @allow_playback_repeats tag. I simply tagged my scenario with this tag, and everything worked properly thereafter:
@allow_playback_repeats
Scenario: Uploading a video initiates an upload to YouTube
Note that it doesn't work if you specify both @vcr and @allow_playback_repeats.
If you're using RSpec, you'll need to adapt the solution accordingly, but, it should be as simple as:
it "does something", :vcr => { allow_playback_repeats: true } do
...
end
I met the same problem, and finally found that there is a parameter which changes on every request.
My solution: copy and paste the mocked parameters and the real parameters side by side, compare them, and also make sure your next unit test generates new parameters.
I'm writing a spec to verify that my Video model will create a proper thumbnail for a vimeo video when it is created. It looks something like this:
it "creates thumbnail url" do
vimeo_url = "http://player.vimeo.com/video/12345"
vid = Factory.build(:video, video_url:vimeo_url)
# thumbnail created when saved
vid.save!
expect do
URI.parse(vid.thumbnail_url)
end.to_not raise_error
end
The problem is that my test is super slow because it has to hit vimeo.com. So I'm trying to stub the method that calls to the server. So two questions:
1) Is this the correct way/time to stub something?
2) If yes, how do I stub it? In my Video model I have a method called get_vimeo_thumbnail() that hits vimeo.com. I want to stub that method. But if in my spec I do vid.stub(:get_vimeo_thumbnail).and_return("http://someurl.com"), it doesn't work. When I run the test it still hits vimeo.com.
The VCR gem is probably worth considering. It hits the real web service the first time you run the test and records the response so it can be replayed the next time you run it (making subsequent runs fast).
I can't see anything wrong with the stub call you are making, as long as you call stub before save! (a sketch follows).
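A minimal sketch of that ordering, assuming get_vimeo_thumbnail is what the save callback ends up calling, and using the same old-style stub syntax as the question:

it "creates thumbnail url" do
  vid = Factory.build(:video, video_url: "http://player.vimeo.com/video/12345")
  vid.stub(:get_vimeo_thumbnail).and_return("http://someurl.com")
  vid.save! # the stub is already in place when the save callback fires
  expect { URI.parse(vid.thumbnail_url) }.to_not raise_error
end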
I also second the use of the 'vcr' gem.
There's also a (pro)-episode of Railscast available about VCR:
http://railscasts.com/episodes/291-testing-with-vcr
VCR can be used to record all outgoing webservice calls into "cassettes" (fixtures) that will be replayed when the tests are run again. So you get the initial set of "real-world" responses but will not hit the remote api anymore.
It also has options to do "on demand" requests when there is no recorded response available locally, and also to make explicit "live" requests.
You can, and should, run the tests against the live endpoint from time to time to verify.
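A minimal sketch of wrapping the spec above in a cassette; the cassette name and record mode are illustrative:

it "creates thumbnail url" do
  VCR.use_cassette("vimeo_thumbnail", record: :once) do # records on the first run, replays afterwards
    vid = Factory.build(:video, video_url: "http://player.vimeo.com/video/12345")
    vid.save!
    expect { URI.parse(vid.thumbnail_url) }.to_not raise_error
  end
end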