How can I decrease my Rails test overhead? - ruby-on-rails

I'm using Test::Unit on a large app with a large number of gem dependencies (>75). I'm trying to develop using BDD, but it takes minutes for the app to load its dependencies before it can run the tests. Is there a way to preload the dependencies and just auto-run the tests on changes, or a similar solution?

I would look into Spork. It works wonders.
https://github.com/sporkrb/spork
https://github.com/sporkrb/spork-testunit

I am using RSpec, and there's a great tool for it called Spork. It basically loads your app once and then just reloads the modified parts. If you combine it with Guard, you get "continuous testing": you hit 'Save' in your editor and the tests start executing, giving you instant feedback. This still amazes me after some months :)
Edit
As @THEM points out, there's a plugin for Spork (spork-testunit) that supports Test::Unit. You should look into it.
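For reference, a minimal Spork setup looks roughly like this (a sketch assuming the standard Spork.prefork/Spork.each_run hooks; with Test::Unit, the spork-testunit gem provides the testdrb runner):

    # spec/spec_helper.rb (or test/test_helper.rb with spork-testunit)
    require 'spork'

    Spork.prefork do
      # Loaded once when the Spork server boots -- put the expensive
      # Rails/gem loading here so it isn't repeated on every test run.
      ENV['RAILS_ENV'] ||= 'test'
      require File.expand_path('../../config/environment', __FILE__)
    end

    Spork.each_run do
      # Runs before every test run -- reload only the things that change often.
    end

Start the server with spork in one terminal, then run tests against it in another (e.g. rspec --drb spec/ or, with spork-testunit, testdrb test/unit/foo_test.rb).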

There was also an interesting article about test speed on the 37Signals blog a while back. Might be of interest even if you end up going with Spork or another solution.

Related

RSpec "retest broken tests from last time" flag

Is there any way to retest previously broken tests?
So, say, I run rspec and several tests in different files and directories fail.
I fix something and now I have to manually specify all the files and folders I want to retest, or run the tests for the whole project again (which takes a considerable amount of time for big projects).
What I was looking for is something like a flag
rspec --prev-failed-only
I realize that such a flag would require a considerable amount of additional work from rspec, like storing the results of previous runs and so on. But I think it would be super convenient.
Is there any such (or similar) tool/gem?
The rspec-rerun gem does what you want: https://github.com/dblock/rspec-rerun
https://github.com/rspec/rspec-core/issues/456 has a good discussion on the topic of making rspec itself able to rerun failed tests.
On the Giant Robots podcast, Sam Phippen of the core team mentioned this feature is due to be added to RSpec soon.
In case anyone else finds this: a month after the question was asked (open source <3), RSpec 3.3 introduced the --only-failures option, along with the delightfully handy --next-failure (-n) option. Run rspec --help for more info.
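For completeness, --only-failures requires RSpec to persist example statuses between runs; a minimal sketch of that configuration:

    # spec/spec_helper.rb
    RSpec.configure do |config|
      # Where RSpec records the pass/fail status of each example between runs;
      # required for --only-failures and --next-failure to work.
      config.example_status_persistence_file_path = 'spec/examples.txt'
    end

    $ rspec --only-failures   # re-run only the examples that failed last time
    $ rspec --next-failure    # same, but stop at the first failure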

Cucumber features are hanging on console

In my Rails application, I have 2000 lines of Cucumber features.
Now I am running all of the features at once using the command rake rcov:features to get a coverage report.
I observed that when running them all at once, the run hangs at some of the features, and because of this the coverage report is never generated.
What are the possible causes of the hang?
I've seen this happen when the code depended on Modernizr and it was removed. I have also seen it happen when an incompatible or unbuildable server is specified in the Gemfile (in my case, a broken build of thin on Windows). I have also seen machines that have issues with Selenium but none with capybara-webkit, and vice versa. Basically, there are about a million things that can go wrong; it seems to me that Rails testing in general would benefit from additional polish and better interaction between the pieces. I wonder if you would have an easier time starting small: instead of trying to find exactly where in the 2000 lines the problem is all at once, it might be easier to remove all but a little bit of the code and add it back slowly until something fails. You could do the same thing using your git repo, if the suite has worked in the past. Break it up into a smaller, simpler, more digestible project.
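A hedged sketch of narrowing it down from the command line (the feature paths and tag name here are hypothetical):

    # Run one directory or one feature at a time until you find the one that hangs
    $ cucumber features/accounts
    $ cucumber features/accounts/login.feature

    # Or tag the suspects yourself (add @suspect above a Feature or Scenario)
    # and run only those
    $ cucumber --tags @suspect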

Track down what's causing slow rspec tests

The specs for my rails project have been really slow lately. I did a git bisect to see if I could determine what has been slowing it and I found that certain commits that were previously running just fine are now just as slow as the current HEAD.
This leads me to believe that my problem is being caused by a gem updating or something else that's not under my source control. The problem still occurs on other dev machines so I don't think it's my personal environment either.
What's the best way to track down my slowest tests and then figure out what's slowing them down so much?
This flag will tell you which tests are the bottlenecks:
$ rspec --profile
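If you want that output on every run, the same thing can be turned on in configuration (a sketch; profile_examples is a standard RSpec setting):

    # spec/spec_helper.rb
    RSpec.configure do |config|
      # Print the 10 slowest examples and example groups after each run
      # (equivalent to passing `rspec --profile 10`)
      config.profile_examples = 10
    end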
Check out the test prof gem:
https://test-prof.evilmartians.io/#/?id=recipes
https://github.com/palkan/test-prof
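A rough sketch of getting started with it (the EVENT_PROF profiler shown here is one of the documented recipes; adjust the event to what you care about):

    # Gemfile
    group :test do
      gem 'test-prof'
    end

    # Then, for example, measure time spent in ActiveRecord SQL per example group:
    # $ EVENT_PROF='sql.active_record' bundle exec rspec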

Speeding up rails tests during development by keeping Rails in memory?

When running rspec tests on my rails app using "rake" or "rake spec", it takes a long time to initialize Rails and start running tests.
Is there a way to keep Rails loaded in memory and run tests against it? I'm hoping there's something similar to using "grails interactive" like this How to speed up grails test execution but for Rails.
That's pretty much exactly what Spork is for.
See the most awesome Railscast, "How I Test": http://railscasts.com/episodes/275-how-i-test
It uses Guard to run the relevant test every time you save a file, so you know almost instantly whether the code you have written passes or fails. Also, if you are running on Linux, you can use the libnotify and libnotify-rails gems, which pop up a (totally unobtrusive) notification indicating a pass or failure, so you don't have to keep checking the console.
There is a Railscast called Spork that specifically discusses this problem at http://railscasts.com/episodes/285-spork and offers an excellent solution with a detailed walkthrough.
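A minimal Guardfile along the lines of what those episodes set up (a sketch using guard-rspec; the Spork Railscast pairs it with guard-spork and the --drb flag):

    # Guardfile
    guard :rspec, cmd: 'bundle exec rspec' do
      # Re-run a spec when it changes
      watch(%r{^spec/.+_spec\.rb$})
      # Re-run the matching spec when an app file changes
      watch(%r{^app/(.+)\.rb$}) { |m| "spec/#{m[1]}_spec.rb" }
      # Re-run the whole suite when the helper changes
      watch('spec/spec_helper.rb') { 'spec' }
    end

Run guard in a terminal and leave it watching; with the libnotify gems mentioned above you also get desktop notifications for each run.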

Is anyone actually running plugin tests/specs in their Rails applications?

We've recently upgraded our Rails application.
To be extra sure everything works, I've tried to get the tests and specs of the various plugins we use (26 at the current count) to run, thinking I'd then add them to our continuous integration, which currently only runs the main application's specs.
I've run into a lot of problems even getting the specs/tests to run at all, not even getting to any individual test failures. For example, I've run across this problem: http://rails_security.lighthouseapp.com/projects/15332/tickets/7-rake-spec-plugin-fails-on-rails-2-1 (thanks by the way for that ticket, even though the issue wasn't fixed).
So the question is: are we unusual in that we care about running plugin tests at all? It doesn't seem to feature much here on SO. My nagging feeling is that they should be run as much as the main specs, but you could also argue that since the main specs pass, the plugins must also work.
A lot of it depends on the plugin/gem being used.
If I know the author/community of the gem is competent, I will skip the tests, simply use the latest stable release, and freeze that gem. I will then track the progress of development on GitHub.
If the plugin/gem is written by an unknown party, I will run the tests, freeze the gem/plugin, and again monitor its development.
Sometimes, however, I will write my own contributions to the gem and fork the code. I will clone the repo on GitHub and base my installations on that fork. At that point, any and all changes result in a complete test run.
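For example, pointing Bundler at a fork looks something like this (the gem and repository names are hypothetical):

    # Gemfile
    # Pull the gem from your fork instead of rubygems.org
    gem 'some_plugin', git: 'https://github.com/your-org/some_plugin.git', branch: 'master'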
With all things in the open-source world, there is an element of trust between the creator and the users of a piece of code. The tests themselves don't tell me much about the codebase; they show that there are tests, and that's it. Do they test everything? Are there edge cases? It's this element of trust I have with certain developers in the community that lets me forgo worrying about running tests for those gems.
It's a slippery slope testing everything; where does it stop? Would you test Rails on every release? No, you assume the community has done this for you already.
