Previously I loaded opal.js, opal-parser.js, and opal-jquery.js (version 0.3x). Now I have added the opal and opal-rails gems (version 0.6.2) instead of the JS files.
Previously my whole test suite (Capybara + RSpec) passed, but after the update the Capybara specs started failing. Sometimes the behaviour is erratic (e.g., a spec passes in Firefox but fails in Chrome).
One example of the errors is:
unknown error: Runtime.evaluate threw exception: RangeError: Maximum call stack size exceeded
It is said that there is a compatibility issue between Opal and RSpec.
I would be grateful if someone could help me fix this issue.
I think it's a known bug in the Chrome webdriver:
https://code.google.com/p/chromedriver/issues/detail?id=683
Related
A specific test in my Rails app is failing on CircleCI.
This test expects an input to exist. The input/form is rendered by React.
Screenshots from the failing test look like the React component is simply not rendering, perhaps a JS error. But I'm having difficulty identifying the cause.
In my local dev environment the form renders correctly.
In my local test environment the test passes.
If I make a remote browser connection to the CI build, the form renders correctly.
I've tried to inspect the CI headless chrome browser console logs, but recent Chrome updates seem to cause this error:
page.driver.browser.manage.logs.get(:browser)
Selenium::WebDriver::Error::WebDriverError:
unexpected response, code=404, content-type="text/plain"
unknown command: session/1595d6324fa6ae6bdc1ed885ba8c9ebf/se/log
Does anyone have a good idea how I can investigate this issue further? Is there an easier way to get hold of the browser console logs from this specific test?
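On newer Chrome/chromedriver the log endpoint sits behind the W3C-mode `goog:loggingPrefs` capability, so one option is to request console logs explicitly when registering the driver. A minimal sketch, assuming selenium-webdriver 4.x and a recent Capybara (the `:logging_chrome` driver name is my own):

```ruby
# rails_helper.rb (or similar) -- a sketch, not a guaranteed drop-in fix
Capybara.register_driver :logging_chrome do |app|
  options = Selenium::WebDriver::Chrome::Options.new
  options.add_argument('--headless=new')
  # Ask Chrome to retain browser console logs under W3C mode
  options.add_option('goog:loggingPrefs', { browser: 'ALL' })
  Capybara::Selenium::Driver.new(app, browser: :chrome, options: options)
end

Capybara.javascript_driver = :logging_chrome
```

With this in place, `page.driver.browser.manage.logs.get(:browser)` should return entries again (the exact accessor varies a little between selenium-webdriver versions).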
Try something like this:
config.after(:each, type: :system, js: true) do
  errors = page.driver.browser.manage.logs.get(:browser)
  if errors.present?
    aggregate_failures 'javascript errors' do
      errors.each do |error|
        expect(error.level).not_to eq('SEVERE'), error.message
        next unless error.level == 'WARNING'
        STDERR.puts 'WARN: javascript warning'
        STDERR.puts error.message
      end
    end
  end
end
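If you want that filtering logic unit-testable on its own, the severity triage can live in a plain helper that the hook then calls. A minimal sketch (`JsLogTriage` and the `Entry` struct are my own names; `Entry` stands in for Selenium's log-entry objects, which also respond to `level` and `message`):

```ruby
# Stand-in for Selenium's log entries (each responds to #level and #message).
Entry = Struct.new(:level, :message)

module JsLogTriage
  module_function

  # Partition browser log entries into [severe_messages, warning_messages],
  # ignoring INFO/DEBUG noise.
  def partition(entries)
    severe   = entries.select { |e| e.level == 'SEVERE'  }.map(&:message)
    warnings = entries.select { |e| e.level == 'WARNING' }.map(&:message)
    [severe, warnings]
  end
end
```

The after-hook can then fail the example when the severe list is non-empty and simply print the warnings.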
This issue caused me many headaches for a long time. Turns out to be an easy fix, and a big part of the problem was that I'd made assumptions I should have checked.
Locally I was running the latest versions of Chrome and ChromeDriver.
I thought I was running the same on my CI builds. It turns out the Docker image I was using did not include the latest version of Chrome as I expected, even when manually configured to download ChromeDriver.
I updated the image to a more recent ruby version, and the problem went away.
Testing a Rails app with rspec & poltergeist, my tests have suddenly started raising
Facebook Pixel Error: ReferenceError: Can't find variable: Set
at http://connect.facebook.net/en_US/fbevents.js:24 in fc
There is a FB pixel embedded in the page, but I cannot figure out what is causing this error. I am unable to recreate it in a browser, and I have been unable to track down the reference to the Set variable in fbevents.js or anywhere else.
Has anyone experienced this, or knows how to resolve?
It's likely because the version of PhantomJS that you're using doesn't support ES6 JavaScript; Set is an ES6 (ES2015) built-in.
I am in the process of upgrading a Rails app that mostly serves JSON. The last working version I was able to upgrade to is 4.1. Once I upgraded to 4.2, request specs produce strange errors in the test log:
Could not log "render_template.action_view" event. NoMethodError: undefined method `render_views?' for #<Class:0x007fe544a2b170>
Somewhere I read that this is due to rails trying to render a view that isn't present. Before the jump to rails 4, we set headers['CONTENT_TYPE'] = 'application/json' and everything was fine. I read that this isn't working anymore with rails 4. I already tried adding format: :json, as suggested here: Set Rspec default GET request format to JSON, which didn't help.
Any help on how to get the specs running again would be greatly appreciated.
As it turns out, this error occurs if an include is missing from the RSpec configuration block. Adding
RSpec.configure do |config|
  config.include RSpec::Rails::ViewRendering
end
fixes the issue.
I recently switched to Capybara 2.5.0 and capybara-webkit 1.7.1.
I have a cucumber feature in which I want to check my javascript handling of a failed ajax request.
With the previous Capybara version:
I stubbed an API request to respond with 400 thus the controller action raised this exception (RestClient::BadRequest) without rescuing it.
My javascript was showing a custom message in case the ajax request failed.
The feature didn't fail when the exception was raised by the controller, the flow was continuing normally. The ajax request failed and my js was handling it as expected.
With the new Capybara version: the feature fails when the exception takes place at the controller level.
I don't want the feature to stop at that level but to continue with the error response to the browser so I can handle the error with my js.
I would guess this behavior change isn't because of the capybara update, but rather because you moved the web_console gem out of the test group. That meant exceptions were never actually being raised in the server because web_console caught them all. Now that exceptions aren't being caught Capybara is displaying them. Capybara has the Capybara.raise_server_errors setting to enable/disable that behavior.
Capybara.raise_server_errors = false
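If you only want server errors ignored for the features that deliberately provoke them, you can toggle the setting around tagged scenarios instead of globally. A sketch for Cucumber (the @allow_server_error tag is my own naming):

```ruby
# features/support/server_errors.rb -- sketch
Around('@allow_server_error') do |_scenario, block|
  original = Capybara.raise_server_errors
  Capybara.raise_server_errors = false
  begin
    block.call
  ensure
    # Restore the previous behaviour for all other scenarios
    Capybara.raise_server_errors = original
  end
end
```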
I have an integration test that fails for a page that depends heavily on JavaScript. The same page runs just fine in the browser.
Simplifying the test to a bare minimum I found out that just testing for the presence of a selector that is added by javascript after the page load, would fail.
After precompiling the test assets and using save_and_open_page I found that the handler for the jQuery ready event is not running during the integration test.
I didn't find any references to this problem, so I guess I'm doing something wrong.
Can anyone help me figuring this out?
I'm using Rails 3.2.11, RSpec 2.13.0, Capybara 2.0.3, and capybara-webkit 0.14.2.
By default Capybara uses the Rack::Test driver, which doesn't execute JavaScript. If you want JS support you need to use one of the JS-aware drivers, such as Selenium (which runs a full-featured browser), capybara-webkit, or Poltergeist (which are headless WebKit browsers). There are many others, but these three are the most popular.
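With capybara-webkit (already in your Gemfile, per the question) the switch is mostly configuration: the gem registers a :webkit driver, and Capybara swaps to it for tagged examples. A sketch:

```ruby
# spec_helper.rb / rails_helper.rb -- sketch
require 'capybara/webkit'   # the gem registers the :webkit driver

Capybara.javascript_driver = :webkit

# Then tag the example so Capybara switches drivers for it:
#   it 'shows the element added on document ready', js: true do ... end
# (with Cucumber, tag the scenario @javascript instead)
```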
I solved this problem by examining my AJAX request. The problem was that I was requesting 'http://..." instead of a relative '/' path on my page. This raised a CORS error ('ajax request from outside the domain') even though the request looked like it was coming from inside the domain.
My original symptom was 'javascript-inserted HTML elements are not on the page when testing using Capybara with Poltergeist or Selenium (but are on the page when running the application).'