How do poltergeist/PhantomJS and capybara-webkit differ?

What are the differences between PhantomJS and capybara-webkit?
What are the advantages of capybara-webkit over PhantomJS?
Which of the two is the more efficient tool?
Others ...

poltergeist is the capybara driver for PhantomJS, a headless browser which is built on WebKit. capybara-webkit is a capybara driver which uses WebKit directly.
poltergeist/PhantomJS has some big advantages over capybara-webkit:
In my experience poltergeist/PhantomJS always times out when it should, whereas capybara-webkit sometimes hangs.
Its error messages are much clearer. For example, it will actually tell you if it can't click an element that is on the page because there's another element in front of it.
It can be told to re-raise JavaScript errors as test errors by instantiating the driver with js_errors: true (see the registration sketch just below).
PhantomJS is much easier to install than standalone WebKit. PhantomJS provides a nearly dependency-free executable that you can download, while standalone WebKit has many OS library dependencies which you may have to upgrade or otherwise fiddle with.
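A minimal sketch of that driver registration, assuming the poltergeist gem is in your test group (the file names are just the usual conventions):
# spec_helper.rb or rails_helper.rb
require 'capybara/poltergeist'

Capybara.register_driver :poltergeist do |app|
  # js_errors: true re-raises JavaScript errors as test failures
  Capybara::Poltergeist::Driver.new(app, js_errors: true)
end
Capybara.javascript_driver = :poltergeist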

TL;DR
Poltergeist/PhantomJS is easier to set up
Poltergeist/PhantomJS has fewer dependencies
Capybara-webkit is more stable and reliable, and it’s better for CI
Long:
I have been using Poltergeist + PhantomJS for more than one year. My largest project has a lot of Ajax calls, file uploads, image manipulations, JS templates and pure CSS3 animations.
From time to time, Poltergeist and PhantomJS generated random errors.
Some of them were my mistakes. Testing Ajax is tricky. A common error was that at the end of a successful test the database_cleaner gem truncated the database while one Ajax call was still running, which caused an exception in the controller because of the now-empty database. This isn’t always easy to fix unless you want to use sleep(). (I do not.)
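One common way to avoid sleep() in that situation, sketched here on the assumption that jQuery handles all the Ajax calls (the helper name and file path are made up):
# spec/support/wait_for_ajax.rb -- hypothetical helper
require 'timeout'

module WaitForAjax
  def wait_for_ajax
    # Use Capybara.default_wait_time instead on Capybara < 2.5
    Timeout.timeout(Capybara.default_max_wait_time) do
      # jQuery.active is the number of Ajax requests still in flight
      loop until page.evaluate_script('jQuery.active').zero?
    end
  end
end

RSpec.configure do |config|
  config.include WaitForAjax, type: :feature
end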
However, many errors with Poltergeist were not my mistakes. I have a test which does the same thing 30 times (for a good reason), and once in a while one of the 30 attempts didn’t work: Poltergeist did not click on the button at all, even though it was a visible, non-animated, ordinary button. I could work around it by clicking again, but that’s an ugly hack and feels wrong.
Sometimes a script that worked in every real browser generated random JavaScript errors with Poltergeist/PhantomJS, roughly 1 or 2 times out of 100.
With two different Ajax uploader plugins I have seen PhantomJS 1.9 and 2.0 behave differently; 2.0 is more stable and consistent, but it’s far from perfect.
This was a huge pain with Jenkins. About every third run was a failure because 1 or 2 of the 400 features (js browser tests) generated random errors.
Two weeks ago I tried Capybara-webkit. It took me a couple of hours to migrate, since the two drivers treat invisible elements differently; Capybara-webkit is the stricter and more correct of the two. I noticed the same about overlapping elements.
Testing Ajax uploading and image manipulation requires custom scripts that I had to modify for Capybara-webkit.
I’m using Mac OS X for development, FreeBSD for production and Linux for Jenkins. Capybara-webkit was more complicated to set up than Poltergeist because it requires a screen (an X display, e.g. Xvfb on a headless machine) and it has many dependencies. Only PhantomJS is truly headless and standalone. I could run PhantomJS on production servers if I wanted. I would not do that with capybara-webkit because of the dependencies.
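For reference, one common way to give capybara-webkit the display it needs on a headless Linux CI box is the headless gem, which manages an Xvfb instance for you; a sketch, assuming Xvfb is installed:
# spec_helper.rb -- only needed on machines without a real display
require 'headless'

headless = Headless.new
headless.start
at_exit { headless.destroy }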
Now I have a 100% stable Jenkins CI. All the random JavaScript errors are a memory of the past. Capybara-webkit always clicks on the button I want it to click on. JavaScript always works fine. Currently I have about 20-25 stable builds in a row.
For projects with a lot of Ajax, I recommend capybara-webkit.
My advice is based on the current, up-to-date versions as of August 2015.
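Switching a suite over is mostly a matter of swapping the gem and the JavaScript driver; a minimal sketch (file names are the usual conventions):
# Gemfile
group :test do
  gem 'capybara-webkit'
end

# spec_helper.rb or rails_helper.rb
require 'capybara/webkit'
Capybara.javascript_driver = :webkit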

capybara-webkit and PhantomJS both use WebKit under the hood to render web pages headlessly, i.e., without a visible browser window. They're different tools, however:
capybara-webkit serves as an adapter for Capybara, a Ruby gem that lets you write and run high-level UI tests for a Rails or Rack app.
PhantomJS is a lower-level tool that simply lets you run scripts against a web page. It can be used to write UI tests as well (see CasperJS, for instance, or any of the other testing tools that build upon PhantomJS).

PhantomJS does not support HTML5 features like audio/video, which really sucks.

Related

Performance issues after upgrading to Capybara Webkit 1.0

We are currently upgrading our Rails 3.2 (Ruby 2, Mongoid 3.1.5) App to Capybara Webkit 1.0.0 from 0.13.1. After the gem upgrade we fixed all new failing specs to comply with Capybara 2's new features and (default) settings. That went quite well. BUT: Our whole test suite is now significantly slower than before (~21 minutes compared to ~12 minutes).
Some tests take about 20 seconds. After lots of debugging we figured out that the issue is not in those slow tests themselves (they run in 2 seconds as a single test or in a selected group) but in the accumulation of several tests. We do run (and test) Ajax calls in most of these feature tests, so the guess is that the webkit server gets blocked after some tests. But we didn't have that problem with the old capybara version.
I know every test suite is quite individual, so I'm not asking for specifics. I'm happy with any idea which could lead to a solution.
Has anyone experienced (and solved ;-) similar problems? Maybe any ideas I didn't have yet?
Clue: check how many files the webkit server has open, and how many webkit processes are running, during the test run:
lsof | grep webkit
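If you want to track that number over the whole run, one hypothetical debugging hook (nothing capybara-webkit-specific, it just shells out to lsof after each feature) could log it:
# spec/support/webkit_fd_logger.rb -- hypothetical debugging aid
RSpec.configure do |config|
  config.after(:each, type: :feature) do
    open_files = `lsof 2>/dev/null | grep -c webkit`.strip
    puts "webkit open files after example: #{open_files}"
  end
end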

jQuery document.ready not called during rails integration test

I have an integration test that fails for a page that depends heavily on javascript. The same page runs just fine in the browser.
Simplifying the test to a bare minimum, I found out that just testing for the presence of a selector that is added by javascript after the page load would fail.
After precompiling the test assets and using save_and_open_page I found that the handler for the jQuery ready event is not running during the integration test.
I didn't find any references to this problem, so I guess I'm doing something wrong.
Can anyone help me figuring this out?
I'm using Rails 3.2.11, RSpec 2.13.0, Capybara 2.0.3 and capybara-webkit 0.14.2.
By default Capybara uses the rack_test driver, which doesn't execute JS. If you want JS support you need to use one of the JS-aware drivers, like Selenium (which runs a full-featured browser), capybara-webkit or poltergeist (which are headless WebKit browsers). There are many others, but these three are the most popular.
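For example, a sketch of a JS-enabled feature spec using capybara-webkit (the path and CSS class here are made up): tagging the example with js: true switches Capybara from rack_test to the configured JavaScript driver, and have_css waits for the ready handler to run.
# spec_helper.rb
require 'capybara/rspec'
require 'capybara/webkit'
Capybara.javascript_driver = :webkit

# spec/features/dynamic_page_spec.rb
feature 'page built on document.ready', js: true do
  scenario 'shows the element inserted by jQuery' do
    visit '/some_page'                       # hypothetical path
    expect(page).to have_css('.added-by-js') # waits until the JS has run
  end
end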
I solved this problem by examining my Ajax request. The problem was that I was requesting an absolute URL ('http://...') instead of a relative path ('/...'). This meant it raised a CORS error ('ajax request from outside the domain') even though the request looked like it was coming from inside the domain.
My original symptom was 'javascript-inserted HTML elements are not on the page when testing using Capybara with Poltergeist or Selenium (but are on the page when running the application).'

Cucumber features are hanging on console

In my rails application, I have 2000 lines of code for cucumber features.
Now I am running all of the features at once using the command rake rcov:features to get a coverage report.
I observed that when running them all at once, they hang at some of the features and, because of this, the coverage report is not generated.
Please suggest what could cause them to hang.
I've seen this happen when the code depended on Modernizr and it was removed. I have also seen it happen when an incompatible/unbuildable server is specified in the Gemfile (in this case, a broken build of thin on Windows). I have also seen machines with issues using selenium and none using capybara-webkit, and vice versa. Basically, there are about a million things that can go wrong; it seems to me that Rails testing in general would benefit from additional polish and improved interaction. I wonder if you would have an easier time starting off small: instead of trying to find out exactly where in the 2000 lines the problem is all at once, it might be easier to remove all but a little bit of the code and add it back slowly until something fails. You could do the same thing using your git repo, if this has worked in the past. Break it up into a smaller, simpler, and more digestible project.
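Another way to find out which feature is hanging is to make slow scenarios fail instead of letting them block the whole run; a hypothetical Cucumber hook with an arbitrary 60-second limit:
# features/support/scenario_timeout.rb -- hypothetical debugging hook
require 'timeout'

Around do |scenario, block|
  # Abort any scenario that runs longer than 60 seconds so the hang is reported
  Timeout.timeout(60) { block.call }
end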

Is More (Less CSS plugin for Rails) still recommended?

The Less gem has been superseded by less.js, which runs on the server with Node.js. More, the "official" Less plugin for Rails, hasn't been updated since June 14, 2010.
In light of all that, what is the recommended way to use Less with Rails these days? I suppose I could always just use client-side JS for this, which everyone seems to be embracing. But I'm not crazy about relying on client-side JS just to transform a stylesheet, especially considering that I'd like to degrade gracefully. I realize that Less.js is considered very fast, but as a matter of principle, I don't want my CSS to be utterly dependent on the browser's JS engine.
Assuming I want to compile Less server-side, what is the best practice these days for use with Rails? I know you can run Less using Node.js, but I'm looking for nice Rails integration such as we once had with More.
I'm looking for something that will work on Linux and Mac. Ideally, it would be a gem or a Rails plugin, not a standalone app.
Update: I'm looking into whether The Ruby Racer can be used to embed Less.js into a Rails app. Does anyone have opinions on that?
Update 2: This question is really old, but for anyone who's still interested, I just wanted to point out that Rails (3.1 and later, via the asset pipeline) comes with SCSS integration out of the box. SCSS is a LESS competitor, and I'm quite happy with it.
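If you do want to go the Ruby Racer route from the first update, one option (a sketch; check current gem names and versions) is the less-rails gem, which compiles Less server-side through an embedded JavaScript runtime so the asset pipeline can treat .less files much like SCSS:
# Gemfile -- assumes Rails 3.1+ and the asset pipeline
gem 'less-rails'
gem 'therubyracer'   # embedded JS runtime used to run less.js inside Ruby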
Try out Less.app for Mac OS X: http://incident57.com/less/
Use it on your local dev machine to generate the CSS.
My only complaint is that the parsing sometimes gets tripped up on IE-specific CSS rules that are invalid CSS but were handled fine by the More gem.
Also, it doesn't handle the Less partials (_file.less) that More does.
Bryan here, developer of Less.app
You can handle ANY IE-specific code in Less just by using the escape function, which takes a string. Thus, you could write: e("filter:alpha..."); in your LESS file and it will compile to the expected IE-specific (though non-standard) CSS.
See Lesscss.org for more info. The bit about the e() function is all the way at the bottom of the docs.
Maybe not helpful at all, but the approach we currently use (PHP/Node dev) is client-side .less during development and compiled .css delivered on deploy, using lessc. That can be integrated into build scripts or commit hooks, no bundled magic :)

Is anyone actually running plugin tests/specs in their Rails applications?

We've recently upgraded our Rails application.
To be extra sure everything works, I've tried to get the tests and specs of the various used plugins (26 at current count) to work, thinking then to add those to our continuous integration, which only runs the main application's specs.
I've run into a lot of problems even getting the specs/tests to run at all, not even getting to any individual test failures. For example, I've run across this problem: http://rails_security.lighthouseapp.com/projects/15332/tickets/7-rake-spec-plugin-fails-on-rails-2-1 (thanks by the way for that ticket, even though the issue wasn't fixed).
So the question is: are we unusual in that we've ever cared about running plugin tests? It doesn't seem to feature much here on SO. My nagging feeling is that they should be run as much as the main specs, but you could also argue that since the main specs work, the plugins must also work.
A lot of it depends on the plugin/gem being used.
If I know the author/community of the gem is competent, I will skip the tests, simply use the latest stable release and freeze that gem. I will then track its development on GitHub.
If the plugin/gem is written by an unknown party, I will run the tests, freeze the gem/plugin and again monitor its development.
Sometimes, however, I will write my own contributions to the gem and fork the code. I will clone the repo on GitHub and base my installations on that. At that point any and all changes result in a complete test run.
With all things in the open source world there is an element of trust between the creator and the users of those pieces of code. The tests themselves don't tell me much about the codebase; they show there are tests and that's it. Do they test everything? Are there edge cases? It's this element of trust I have with certain developers in the community that means I forgo worrying about running tests for those gems.
It's a slippery slope testing everything; where does it stop? Would you test Rails on every release? No, you assume the community has done this for you already.
