Optimize Capybara test times (ruby-on-rails)

I have a test suite of acceptance tests in my Rails app that uses pure Capybara (no Cucumber).
It has 220 examples and takes 21 minutes to finish. My non-JS driver is rack_test and my JS driver is capybara-webkit rather than Selenium.
I would like to improve test times, but I don't know whether there is a common bottleneck in this kind of testing.
Some ideas I have/had:
Change the Capybara server. It was using Mongrel as a fallback; the default is Thin. I installed Thin but didn't get any speed improvement. It seems Thin's advantage is concurrency, and tests don't exercise that.
Since I am cleaning the database between tests, before each example of a private part of my app (MOST of the examples are like this) I need to log in. That means logging into the app 200 times. Is there a way to maintain the session between examples to avoid logging in again and again?

Two things come to mind:
parallel_tests can improve your test speed if you run on a multicore machine: https://github.com/grosser/parallel_tests
Providing a backdoor login route for your tests can improve test speed by bypassing the login step.
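As a sketch of the backdoor idea: mount a login route only in the test environment. The route name, controller, and `session[:user_id]` key below are illustrative and would need to match your app's actual authentication:

```ruby
# config/routes.rb -- mount the backdoor only in the test environment
get '/test_login/:user_id', to: 'test_sessions#create' if Rails.env.test?

# app/controllers/test_sessions_controller.rb
class TestSessionsController < ApplicationController
  def create
    # Defense in depth: never allow this outside the test environment.
    return head(:forbidden) unless Rails.env.test?
    session[:user_id] = params[:user_id] # adjust to your auth scheme
    redirect_to root_path
  end
end
```

In a spec, `visit "/test_login/#{user.id}"` then replaces filling in and submitting the login form, turning 200 slow UI logins into 200 single requests.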
In general, acceptance tests are slow. That's why I use them only for testing critical user workflows, and I try to keep my whole test suite within a five-minute range. I really think it's critical for your application's test suite to be fast, which is why I try to put a lot of logic outside of Rails so that a test run completes within a second or less.
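Setting up the parallel_tests gem mentioned above is mostly configuration; the rake tasks below come from the project's README, and the Gemfile group placement is the usual convention:

```ruby
# Gemfile
group :development, :test do
  gem 'parallel_tests'
end
```

After `bundle install`, `rake parallel:create parallel:prepare` sets up one test database per CPU core, and `rake parallel:spec` runs the suite split across them.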

Related

How to write fast acceptance tests in Rails

I was writing Java for some time, and now I have been working with Rails for about a year. I was immediately put into a big project with a lot of specs (which was really new to me; I hadn't used much TDD/BDD before).
OK, I learned the process and everything works fine, but now I would like to take the next step and make my specs a lot faster. In our project we have a lot of acceptance tests, where we test end-to-end functionality. For example: the user signs in, clicks some button, a JavaScript popup appears, and he submits the form. Simple functionality, but the problem is that these tests are really slow. We are using RSpec with Capybara and Selenium, so our tests have to open the browser, imitate the user's actions, wait for JavaScript, etc.
My question is: how do we do this in a better, faster way? We have a lot of acceptance tests, and after running all the specs in our project we have to wait about 30 minutes for them to finish.
Use capybara-webkit. It's faster because it drives a headless WebKit engine rather than opening a real browser window, so it avoids Selenium's browser startup and rendering overhead while still exercising your JavaScript. This should help you get started.
The previous answer from Babar is fine: use capybara-webkit. But you can evaluate PhantomJS + Poltergeist too. I'm using PhantomJS + Poltergeist; all my tests have improved their execution time, and as a bonus I can now detect JS errors.
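Switching the JS driver is a small configuration change. A sketch of registering Poltergeist, assuming the poltergeist gem is installed and a PhantomJS binary is on the PATH:

```ruby
# spec/spec_helper.rb
require 'capybara/poltergeist'

Capybara.register_driver :poltergeist do |app|
  # js_errors: true makes page JavaScript errors fail the test
  # instead of passing silently.
  Capybara::Poltergeist::Driver.new(app, js_errors: true)
end
Capybara.javascript_driver = :poltergeist
```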

Browse a Rails App with the DB in Sandbox Mode?

I'm writing a lot of request specs right now, and I'm spending a lot of time building up factories. It's pretty cumbersome to make a change to a factory, run the specs, and see if I forgot about any major dependencies in my data. Over and over and over...
It makes me want to set up some sort of sandboxed environment, where I could browse the site and refresh the database from my factories at will. Has anyone ever done this?
EDIT:
I'm running spork and rspec-guard to make this easier, but I still lose a lot of time.
A large part of that time is spent waiting for Capybara/Firefox to spin up. These are request specs, and quite often there are some JavaScript components that need to be exercised as well.
You might look at a couple of solutions first:
You can run specific test files rather than the whole suite with something like rspec spec/request/foo_spec.rb. You can run a specific test with the -e option, or by appending :lineno to the filename, where lineno is the line number the test starts on.
You can use something like guard-rspec, watchr, or autotest to automatically run tests when their associated files change.
Tools like spork and zeus can be used to preload the app environment so that test-suite runs take less time to start. However, I don't think they will reload factories, so they may not apply here.
See this answer for ways that you can improve Rails' boot time. This makes running the suite substantially less painful.
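As a sketch of the guard-rspec option above, a minimal Guardfile that re-runs a spec when it, or the app file it covers, changes. The watch patterns are the conventional ones and may need adjusting to your project layout:

```ruby
# Guardfile (assumes the guard-rspec gem)
guard :rspec do
  watch(%r{^spec/.+_spec\.rb$})                            # re-run a spec when it changes
  watch(%r{^app/(.+)\.rb$}) { |m| "spec/#{m[1]}_spec.rb" } # app file -> matching spec
  watch('spec/spec_helper.rb') { 'spec' }                  # helper change -> full suite
end
```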

Are there common reasons why Cucumber tests fail 60% of the time on otherwise passing functional code?

I recently started working on a project that has all passing cucumber tests. But I would say 60% of the time they fail on Timeouts, or just all together random intermittent errors. So roughly 1/4 times everything will pass and be green.
Are there common reasons for this sort of intermittence? Should I be concerned?
Acceptance tests can be tricky most of the time.
You have to check the asynchronous parts of your code (long database transactions, Ajax, message queues). Set timeouts that make sense for you, for the tests, and for the build time (a long build time is not good; I think 10 minutes is acceptable, and beyond that you should review whether your tests are good enough).
Another problem is the browser (if you're using one): it can take a lot of time to warm up and run all the tests.
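On the async point: Capybara already retries its finders and matchers up to a bounded wait, so tuning that one setting is usually better than sprinkling sleeps through the suite. The value below is only an example:

```ruby
# spec/spec_helper.rb
# Capybara retries finders/matchers for up to this many seconds
# before failing, which covers most Ajax-driven timing flakiness.
Capybara.default_max_wait_time = 5 # named default_wait_time before Capybara 2.1
```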

Is it possible to simulate page requests in Rails using rake?

I've been working on a rails project that's unusual for me in a sense that it's not going to be using a MySQL database and instead will roll with mongoDB + Redis.
The app is pretty simple: "boot up" data from MongoDB into Redis, after which point Rails will be ready to take requests from users. These will consist mainly of pulling data from Redis (I was told it'd be pretty darn fast at this), doing a quick calculation, and sending some of the data back to the user.
This will be happening ~1500-4500 times per second, with any luck.
Before the might of the user army comes down on the server, I was wondering if there was a way to "simulate" the page requests somehow internally - like running a rake task to simply execute that page N times per second or something of the sort?
If not, is there a way to test that load and then time the requests to get a rough idea of the delay most users will be looking at?
Caveat
Performance testing is a very broad topic, and the right tool often depends on the type and quality of results that you need. As just one example of the issues you have to deal with, consider what happens if you write a benchmark spec for a specific controller action, and call that method 1000 times in a row. This might give a good idea of performance of that controller method, but it might be making the same redis or mongo query 1000 times, the results of which the database driver may be caching. This also ignores the time it'll take your web server to respond and serve up the static assets that are part of the request (this may be okay, especially if you have other tests for this).
Basic Tools
ab, or ApacheBench, is a simple commandline tool that you can use to test the throughput and speed of your app. I usually go to this first when I want to send a thousand requests at a web server, or test how many simultaneous requests my app can handle (e.g. when comparing mongrel, unicorn, thin, and goliath). Because all requests originate from the same server, this is good for a small number of requests, but as the number of requests grow, you'll be limited by the resources on your testing machine (network stack, cpu, and maybe memory).
Benchmark is a standard Ruby class and is great for quickly spitting out some profiling information. It can also be used with Test::Unit and RSpec. If you want a rake task for doing some quick benchmarking, this is probably the place to start.
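A quick sketch of the standard Benchmark class; everything here is stdlib, and the workloads are placeholders for whatever you want to time:

```ruby
require 'benchmark'

n = 50_000
# Benchmark.bm prints a timing row per report and returns
# an array of Benchmark::Tms results.
Benchmark.bm(14) do |x|
  x.report('string concat:') { s = String.new; n.times { s << 'a' } }
  x.report('array sum:')     { (1..n).reduce(:+) }
end
```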
mechanize - I like using mechanize for quickly scripting an interaction with a page. It handles cookies and forms, but won't go and grab assets like images by default. It can be a good tool if you're rolling your own tests, but shouldn't be the first one to go to.
There are also some tools that will simulate actual users interacting with the site (they'll download assets as a browser would, and can be configured to simulate several different users). Most notable are The Grinder and Tsung. While still very much in development, I'm currently working on tsung-rails to make it easier to automate rails load testing with tsung, and would love some help if you choose to go in this direction :)
Rails Profiling Links
Good overview for writing performance tests
Great slide deck covering most of the latest tools for profiling at various levels

JRuby-friendly method for parallel-testing Rails app

I am looking for a system to parallelize a large suite of tests in a Ruby on Rails app (using RSpec and Cucumber) that works under JRuby. Cucumber is actually not too bad, but the full RSpec suite currently takes nearly 20 minutes to run.
The systems I can find (hydra, parallel_tests) look like they use forking, which isn't the ideal solution for the JRuby environment.
We don't have a good answer for this kind of application right now. Just recently I worked on a fork of spork that allows you to keep a process running and re-run specs or features in it, provided you're using an app framework that supports code reloading (like Rails). Take a look at the jrubyhub application for an example of how I use Spork.
You might be able to spawn a spork instance for your application and then send multiple, threaded requests to it to run different specs. But then you're relying on RSpec internals to be thread-safe, and unfortunately I'm pretty sure they're not.
Maybe you could take my code as a starting point and build a cluster of spork instances, and then have a client that can distribute your test suite across them. It's not going to save memory and will still take a long time to start up, but if you start them all once and just re-use them for repeated runs, you might make some gains in efficiency.
Feel free to stop by the JRuby mailing list (user@jruby.codehaus.org) or #jruby on freenode for more ideas.
