Performance issues after upgrading to Capybara Webkit 1.0

We are currently upgrading our Rails 3.2 app (Ruby 2, Mongoid 3.1.5) from Capybara Webkit 0.13.1 to 1.0.0. After the gem upgrade we fixed all newly failing specs to comply with Capybara 2's new features and (default) settings. That went quite well. BUT: our whole test suite is now significantly slower than before (~21 minutes compared to ~12 minutes).
Some tests take about 20 seconds. After lots of debugging we figured out that the issue is not in those slow tests themselves (they run in 2 seconds as a single test or in a selected group) but in the accumulation of several tests. We run (and test) Ajax calls in most of these feature tests, so our guess is that the webkit server gets blocked after some tests. But we didn't have that problem with the old Capybara version.
I know, every test suite is quite individual, so I'm not asking for specifics. I'm happy with any idea that could lead to a solution.
Has anyone experienced (and solved ;-) similar problems? Or maybe some ideas I haven't had yet?

Clue: check how many files the webkit server holds open, and how many webkit processes are running, during the test run:
lsof | grep webkit
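If that count climbs steadily as the suite progresses, leaked file descriptors or stray webkit_server processes are a likely culprit. A minimal sketch of how you might log the count after each example (assuming RSpec and a Unix-like system with lsof available; the hook itself is our own, not part of capybara-webkit):

# spec/support/webkit_fd_logger.rb (hypothetical file name)
RSpec.configure do |config|
  config.after(:each) do
    # Count lines in lsof output that mention webkit (server + processes)
    open_files = `lsof 2>/dev/null | grep -c webkit`.strip
    puts "open webkit files after example: #{open_files}"
  end
end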

Related

Big performance reduction after Rails 5 upgrade

We have completed upgrading our app from Rails 4.2 to 5.2. When we run load testing on the 5.2 version, it can only handle half the load of the 4.2 version. Looking at New Relic stats during the load tests, it seems to be slower everywhere - pretty much every request, ActiveRecord calls, Redis calls, Ruby, etc. We have confirmed it is not related to the other upgrades that happened at the same time - the Ruby upgrade, upgrading the pg gem, or upgrading Puma. While researching, the only performance issues I have found related to the upgrade have already been fixed.
Has anyone run into something similar or have pointers on where to look?
What we have tried so far:
1. Check non-Rails-related upgrades that happened at the same time:
- Upgrade the 4.2 branch to the same version of Ruby to see if that has any impact (no impact)
- Downgrade the puma and pg gems in the Rails 5 branch (no impact)
2. Examine performance traces for slower transactions and DB queries. Remove the slowest interactions from the load test to see if the overall slowness continues (it does).
3. Test if the slowness appears in Rails 5.1 (it does).
What we are planning to try:
1. Test if the slowness appears in Rails 5.0.
2. See if the slowness can be detected in single-user use rather than a load test.
3. Use https://github.com/tleish/ruby-prof-rails to see if we can get more statistics to examine.
4. Downgrade all gems except the ones we absolutely need for the Rails 5 upgrade and see if the problem still exists.
This ended up being a combination of Rack::Timeout, Heroku and Puma. Under heavy load, we would sometimes hit our Rack::Timeout value of 28 seconds. For some reason, after the upgrade, the 2 seconds between the Rack::Timeout value and Heroku's 30-second router timeout (H12) was not enough. As a result, processes were getting killed by Heroku before Rack::Timeout fired, which caused a cascade effect in Puma that made a ton of other requests on the same server also time out. We fixed it by adjusting our Rack::Timeout value to 25 seconds and everything worked.
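For reference, a sketch of how such a timeout can be set (the exact configuration API varies by rack-timeout gem version; newer versions also read a RACK_TIMEOUT_SERVICE_TIMEOUT environment variable):

# config/application.rb
# Keep the service timeout comfortably below Heroku's 30-second router
# timeout so Rack::Timeout fires before Heroku kills the request.
config.middleware.insert_before Rack::Runtime, Rack::Timeout, service_timeout: 25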

How do poltergeist/PhantomJS and capybara-webkit differ?

What are the differences between PhantomJS and capybara-webkit?
What are the advantages of capybara-webkit over PhantomJS?
Which of the two is the more efficient tool?
Others ...
poltergeist is the capybara driver for PhantomJS, a headless browser which is built on WebKit. capybara-webkit is a capybara driver which uses WebKit directly.
poltergeist/PhantomJS has some big advantages over capybara-webkit:
In my experience poltergeist/PhantomJS always times out when it should, whereas capybara-webkit sometimes hangs.
Its error messages are much clearer. For example, it will actually tell you if it can't click an element that is on the page because there's another element in front of it.
It can be told to re-raise JavaScript errors as test errors (by instantiating the driver with js_errors: true; see the sketch after this list).
PhantomJS is much easier to install than standalone WebKit. PhantomJS provides a nearly dependency-free executable that you can download, while standalone WebKit has many OS library dependencies which you may have to upgrade or otherwise fiddle with.
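Registering Poltergeist that way is the standard setup from its README:

require 'capybara/poltergeist'

Capybara.register_driver :poltergeist do |app|
  # js_errors: true re-raises JavaScript errors as test failures
  Capybara::Poltergeist::Driver.new(app, js_errors: true)
end
Capybara.javascript_driver = :poltergeist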
TL;DR
Poltergeist/PhantomJS is easier to set up
Poltergeist/PhantomJS has fewer dependencies
Capybara-webkit is more stable and reliable and it’s better for CI
Long:
I have been using Poltergeist + PhantomJS for more than one year. My largest project has a lot of Ajax calls, file uploads, image manipulations, JS templates and pure CSS3 animations.
From time to time, Poltergeist and PhantomJS generated random errors.
Some of them were my mistakes. Testing Ajax is tricky. A common error was that at the end of a successful test the database_cleaner gem truncated the database while one Ajax call was still running, which generated an exception in the controller due to the empty database. This isn't always easy to fix unless you want to use sleep(). (I do not.)
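One common alternative to sleep() for that race is to block until the page reports no in-flight Ajax requests before the example ends. A minimal sketch, assuming the app uses jQuery (the helper name is our own):

require 'timeout'

def wait_for_ajax
  # Poll until jQuery reports zero active Ajax requests, giving up after
  # Capybara's wait time (default_wait_time on Capybara before 2.5).
  Timeout.timeout(Capybara.default_max_wait_time) do
    sleep 0.05 until page.evaluate_script('jQuery.active').zero?
  end
end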
However, many errors with Poltergeist were not my mistakes. I have a test which does the same thing 30 times (for a good reason), and once in a while 1 of the 30 times didn't work: Poltergeist did not click on the button at all. It was a visible, not animated, normal button. I could work around it (by clicking on it again), but that's an ugly hack and feels wrong.
Sometimes a script that worked in all browsers generated random JavaScript errors with Poltergeist/PhantomJS. About 1 or 2 times out of 100.
With two different Ajax uploader plugins I have experienced that PhantomJS 1.9 and 2.0 behave differently. 2.0 is more stable and consistent, but it's far from perfect.
This was a huge pain with Jenkins. About every third run was a failure because 1 or 2 of the 400 features (js browser tests) generated random errors.
Two weeks ago I tried Capybara-webkit. It took me a couple of hours to migrate, since they treat invisible elements differently; Capybara-webkit is more correct, or at least stricter, about this. I noticed the same about overlapping elements.
Testing Ajax uploading and image manipulation requires custom scripts that I had to modify for Capybara-webkit.
I’m using Mac OS X for development, FreeBSD for production and Linux for Jenkins. Capybara-webkit was more complicated to set up than Poltergeist because it requires a screen (typically a virtual one like Xvfb on headless CI boxes) and it has many dependencies. Only PhantomJS is truly headless and standalone. I could run PhantomJS on production servers if I wanted; I would not do that with capybara-webkit because of the dependencies.
Now I have a 100% stable Jenkins CI. All the random JavaScript errors are memories of the past. Capybara-webkit always clicks on the button I want it to click. JavaScript always works fine. Currently I have about 20-25 stable builds in a row.
For projects with a lot of Ajax, I recommend capybara-webkit.
My advice is based on the current, up-to-date versions as of August 2015.
capybara-webkit and PhantomJS both use WebKit under the hood to render web pages headlessly, i.e., without the need for a visible browser. They're different tools, however:
capybara-webkit serves as an adapter for Capybara, a Ruby gem that lets you write and perform high-level UI testing for a Rails or Rack app.
PhantomJS is a lower-level tool that simply lets you run scripts against a web page. It can also be used to write UI tests (see Casper, for instance, or any of the other testing tools that build upon PhantomJS).
PhantomJS does not support HTML5 features like audio/video, which really sucks.
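For comparison with the Poltergeist setup shown earlier, wiring Capybara to capybara-webkit is just the standard setup from the capybara-webkit README:

require 'capybara/webkit'

Capybara.javascript_driver = :webkit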

Why are my cucumber scenarios failing when steps are run together, but pass when run singularly?

When I run my cucumber scenarios as a whole with the command cucumber, I get 7 failing steps. When I run them individually with the work-in-progress tag, they pass fine.
I don't think it's a database state issue. I'm running with transactions, and I also tried running without them, cleaning the database with database_cleaner instead... it still does not help.
I tried to run the debugger, but it does not seem to work when I run the command cucumber. It only works when I run with the work-in-progress tag: cucumber -p wip
I thought it might be that things are running too fast and capybara is not checking things properly?
Any ideas?
Eureka! I've been having this same problem for a while now - my tests got slower and slower the more I added - also, some tests would fail randomly, but only when run as a whole suite - after my tests finished I would just run the feature again and voila! all passing. Very frustrating - but the MOST frustrating part was the speed - recently I upgraded to Snow Leopard and compiled everything to 64 bits. The result? My tests went from taking 7 minutes to 32!
There's a clue in that, however - 64-bit apps use more memory to do the same thing, apparently - yet when I was running my tests the memory on my machine never came close to maxing out. Hint #2? Webrat was going fast; it was only when using Culerity/Celerity to test JavaScript that things really slowed down.
After poking around I found out that JRuby tells Java to give it a maximum heap size of 512 MB. JRuby allows you to set Java options when it is invoked, and Culerity provides an environment variable that lets you invoke JRuby any way you like. Sure enough, around that limit, Java would stop consuming memory and the processor would try to set itself on fire. So are you ready? Here it is:
JRUBY_INVOCATION="jruby -J-Xmx1024m" cucumber
That increased my heap size to a gigabyte and my test time dropped down to 7 minutes! Is that it? Did I get it? I sure hope it helps!

Is anyone actually running plugin tests/specs in their Rails applications?

We've recently upgraded our Rails application.
To be extra sure everything works, I've tried to get the tests and specs of the various plugins we use (26 at current count) to work, thinking I'd then add those to our continuous integration, which currently only runs the main application's specs.
I've run into a lot of problems even getting the specs/tests to run at all, never even getting to individual test failures. For example, I've run across this problem: http://rails_security.lighthouseapp.com/projects/15332/tickets/7-rake-spec-plugin-fails-on-rails-2-1 (thanks, by the way, for that ticket, even though the issue wasn't fixed).
So the question is: are we unusual in that we care about running plugin tests at all? It doesn't seem to feature much here on SO. My nagging feeling is that they should be run as much as the main specs, but you could also argue that since the main specs pass, the plugins must also work.
A lot of it depends on the plugin/gem being used.
If I know the author/community of the gem is competent, I will skip the tests and simply use the latest stable release and freeze that gem. I will then track the progress of development on GitHub.
If the plugin/gem is written by an unknown party, I will run the tests, freeze the gem/plugin, and again monitor development.
Sometimes, however, I will write my own contributions to the gem and fork the code. I will clone the repo on GitHub and base my installations on that. At that point, any and all changes result in a complete test run.
With all things in the open source world there is an element of trust between the creator and the users of those pieces of code. The tests themselves don't tell me much about the codebase; they show there are tests, and that's it. Do they test everything? Are there edge cases? It's this element of trust I have with certain developers in the community that means I forgo worrying about running tests for those gems.
It's a slippery slope testing everything - where does it stop? Would you test Rails on every release? No, you assume the community has done this for you already.

Help w/ Sluggish "rake cucumber"

I've been trying to debug some super slow performance in running my cucumber features. I've run various calls through ruby-prof and think I see the bottlenecks (I'm not too familiar with using ruby-prof), but I don't know the cause or, more importantly, the solution. I've included below the output from running rake cucumber.
http://dl.dropbox.com/u/1788885/rake_cucumber.txt
Does anyone have any idea why this is happening or how I could go about debugging it further?
Thanks,
Eric
So, I happen to have been playing around with this all morning. It turns out that if you do:
rake cucumber
that does indeed take forever to run. (About 20 secs on my laptop.) But, apparently:
cucumber
runs just fine without the overhead of Rake, in about 8 secs.
I'm not sure what might be making cucumber run slowly for you. As a possible workaround, you could consider using spork. On my Windows 7 netbook, running just one cucumber test went from around 7 minutes to 10 seconds with spork.
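For reference, the usual spork workflow (per the spork README) is to preload the app in one terminal and point cucumber at it with --drb in another; the feature path below is just an example:

spork cucumber
cucumber --drb features/my_feature.feature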
