Cucumber features are hanging on console - ruby-on-rails

In my Rails application I have about 2000 lines of Cucumber features. I run them all at once with the command rake rcov:features to generate a coverage report.
I've observed that when run all together, the suite hangs on some of the features, and because of this the coverage report is never generated.
What are the possible causes of the hang?

I've seen this happen when the code depended on Modernizr and it was removed. I've also seen it when an incompatible or unbuildable server is specified in the Gemfile (in that case, a broken build of Thin on Windows), and I've seen machines that have trouble with Selenium but not capybara-webkit, and vice versa. Basically, there are about a million things that can go wrong; Rails testing in general would benefit from additional polish and smoother interaction between the pieces.

I wonder if you would have an easier time starting small. Instead of trying to find the culprit in all 2000 lines at once, it may be easier to remove all but a little bit of the code and add it back in slowly until something fails. You could do the same thing using your git history, if the suite has worked in the past. Break it up into a smaller, simpler, and more digestible project.
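Along those lines, here is a rough bisection helper; it is only a sketch, and the 120-second per-feature budget is an assumption you should tune. It runs each feature file in its own cucumber process and kills anything that exceeds the budget, so a hang shows up as a named file instead of a frozen suite:

# find_hanging_feature.rb -- sketch; run from the app root
require 'timeout'

$stdout.sync = true  # show each filename as it starts, not after

Dir.glob('features/**/*.feature').sort.each do |feature|
  print "#{feature}... "
  # Run the feature in isolation, silencing its output
  pid = Process.spawn('cucumber', feature, out: File::NULL, err: File::NULL)
  begin
    Timeout.timeout(120) { Process.wait(pid) }  # assumed per-feature budget
    puts $?.success? ? 'passed' : 'failed'
  rescue Timeout::Error
    Process.kill('KILL', pid)
    Process.wait(pid)  # reap the killed child
    puts 'HUNG -- investigate this file first'
  end
end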

Related

Testing legacy app in RoR - skip the DB tasks

I'm mostly a PHP dev and just starting out with Ruby, so please correct me if I say something dumb.
I'm working on fixing some bugs in a "legacy" app written in Rails. The app itself has never been unit-tested. I can see the test scaffolding generated by Rails, but actual tests are nowhere to be found.
The app is pretty big and the code quality is very bad, so I wanted to write some unit tests for the functionality I'll be fixing before writing any code.
The problem is that when I run the rake test command, the testing DB is created, but any tests I write keep crashing. There were several problems with relations and keys which I tried to fix, but more problems just keep appearing. I understand that the DB is created from the schema.rb file, but I'm sure it is simply outdated by now. That is another issue I may eventually fix, but for now I just want to write some basic unit tests that don't even use the DB.
So the question is: is it possible to write unit tests for some methods without invoking all the test DB scripts? I'm aware this may not be best practice, but I will feel better modifying the app with some test coverage in place, and starting off by fixing a DB I don't yet understand seems like a bad idea.
I'm using Ruby 2.1.10 and the app is written in Rails 4.0.4; these seem to be the latest versions I could get the app running on.
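One way this can work, sketched below, is a plain Minitest file that requires only the class under test and never loads the Rails environment, so rake test and its DB setup are bypassed entirely. PriceCalculator here is a hypothetical pure-Ruby class standing in for whatever you're fixing; this only works for code that doesn't itself need ActiveRecord:

# test/standalone/price_calculator_test.rb -- sketch; PriceCalculator is hypothetical
require 'minitest/autorun'
require_relative '../../app/models/price_calculator'  # assumed plain-Ruby class

class PriceCalculatorTest < Minitest::Test
  def test_applies_a_percentage_discount
    assert_equal 90.0, PriceCalculator.discounted(100.0, 0.10)
  end
end

Running it directly with ruby test/standalone/price_calculator_test.rb, rather than through rake test, skips the test-DB preparation machinery entirely.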

How can I decrease my Rails test overhead?

I'm using Test::Unit on a large app with a large number of gem dependencies (>75). I'm trying to develop using BDD, but it takes minutes for the app to load its dependencies before it can run the tests. Is there a way to preload the dependencies and auto-run the tests on changes, or a similar solution?
I would look into Spork. It works wonders.
https://github.com/sporkrb/spork
https://github.com/sporkrb/spork-testunit
I am using RSpec and there's a great tool for it, called Spork. It basically loads your app once and then just reloads modified parts. If you combine it with Guard, you get "continuous testing". That is, you hit 'Save' in your editor and tests start executing, giving you instant feedback. This still amazes me after some months :)
Edit
As @THEM points out, there's a Spork plugin that supports Test::Unit. You should look into it.
There was also an interesting article about test speed on the 37signals blog a while back. It might be of interest even if you end up going with Spork or another solution.
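For reference, a minimal sketch of the Spork layout being described, assuming RSpec as in the answer above (spork-testunit follows the same prefork/each_run shape):

# spec/spec_helper.rb -- minimal Spork arrangement (sketch)
require 'spork'

Spork.prefork do
  # Loaded once when the Spork server boots -- put the slow requires here.
  ENV['RAILS_ENV'] ||= 'test'
  require File.expand_path('../../config/environment', __FILE__)
  require 'rspec/rails'
end

Spork.each_run do
  # Runs before each test run -- reload anything that changes between runs.
end

Start the server with spork, then run rspec --drb spec so the tests hit the preloaded app instead of booting Rails each time.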

Why are my cucumber scenarios failing when steps are run together, but passing when run individually?

When I run my Cucumber scenarios as a whole, with the bare command cucumber, I get 7 failing steps. When I run them individually with the work-in-progress tag, they pass fine.
I don't think it's a database state issue. I'm running with transactions, and I also tried running without them and cleaning the database with database_cleaner instead; it still does not help.
I tried to run the debugger, but it does not seem to work when I run the plain cucumber command. It only works when I run with the work-in-progress profile: cucumber -p wip
Could it be that things are running too fast and Capybara is not checking things properly?
Any ideas?
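In case it helps rule out state leakage, this is the usual features/support/env.rb arrangement for truncation-based cleaning with database_cleaner; a sketch of the standard setup, not a diagnosis:

# features/support/env.rb (excerpt) -- sketch of the common setup
require 'database_cleaner'
require 'database_cleaner/cucumber'  # installs Before/After hooks around each scenario

# Truncation instead of transactions, since @javascript scenarios exercise the
# app from a separate thread that can't see an uncommitted transaction.
DatabaseCleaner.strategy = :truncation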
Eureka! I've been having this same problem for a while now: my tests got slower and slower the more I added, and some tests would fail randomly, but only when run as part of the whole suite. After the suite finished I would just run the failing feature again and voilà! All passing. Very frustrating, but the MOST frustrating part was the speed. I recently upgraded to Snow Leopard and compiled everything for 64 bits. The result? My tests went from taking 7 minutes to 32!
There's a clue in that, however: 64-bit apps use more memory to do the same work, apparently. Yet when I was running my tests, the memory on my machine was never coming close to maxing out. Hint #2? Webrat was going fast; it was only when using Culerity/Celerity to test JavaScript that things were really slowing down.
After poking around I found out that JRuby tells Java to give it a maximum heap size of 512 MB. JRuby allows you to pass Java options when it is invoked, and Culerity provides an environment variable that lets you invoke JRuby any way you like. Sure enough, right around that 512 MB ceiling, Java would stop consuming memory and the processor would try to set itself on fire. So are you ready? Here it is:
JRUBY_INVOCATION="jruby -J-Xmx1024m" cucumber
That increased the heap size to a gigabyte, and my test time dropped back down to 7 minutes! Is that it? Did I get it? I sure hope it helps!

Is anyone actually running plugin tests/specs in their Rails applications?

We've recently upgraded our Rails application.
To be extra sure everything works, I've tried to get the tests and specs of the various plugins we use (26 at current count) to run, thinking then to add those to our continuous integration, which currently only runs the main application's specs.
I've run into a lot of problems even getting the specs/tests to run at all, not even getting to any individual test failures. For example, I've run across this problem: http://rails_security.lighthouseapp.com/projects/15332/tickets/7-rake-spec-plugin-fails-on-rails-2-1 (thanks by the way for that ticket, even though the issue wasn't fixed).
So the question is: are we unusual in that we've cared about running plugin tests at all? It doesn't seem to come up much here on SO. My nagging feeling is that they should be run as much as the main specs, but you could also argue that since the main specs pass, the plugins must be working too.
A lot of it depends on the plugin/gem being used.
If I know the author/community of the gem is competent, I will skip the tests, simply use the latest stable release, and freeze that gem. I will then track the progress of its development on GitHub.
If the plugin/gem is written by an unknown party, I will run the tests, freeze the gem/plugin, and again monitor its development.
Sometimes, however, I will write my own contributions to the gem and fork the code. I will clone the repo on GitHub and base my installations on that fork, at which point any and all changes result in a complete test run.
As with all things in the open-source world, there is an element of trust between the creator and the users of those pieces of code. The tests themselves don't tell me much about the codebase; they show that there are tests, and that's it. Do they test everything? Are there edge cases? It's this element of trust in certain developers in the community that lets me forgo running the tests for their gems.
Testing everything is a slippery slope: where does it stop? Would you test Rails itself on every release? No, you assume the community has done that for you already.

How hard is it to upgrade from Rails 1.2.3 to 2.3.5?

Is it even worth it?
I'm working on assessing a legacy code base for a client -- the source code has been largely untouched since 2007 and it's built with Rails 1.2.3.
My Rails experience began at version 2.1. The code is fairly stock/scaffold-like and devoid of meaningful tests. I was curious to see whether I could even get it running locally, but I'm not sure where to start; right off, it doesn't even know what 'rake db:create' means. Ha!
Is it going to be a major pain to even get it running on 2.3.5? Should I bother?
Would love to hear your thoughts.
Thanks
If you're going to be actively developing the site, then yes, it is worth sinking the time into the project to bring it up to date. A lot has happened since Rails 1.2 which will make development a much more pleasant experience. Life without named scopes or RESTful resources is really difficult. If you're just patching the odd thing here and there, it may be worth leaving it mostly as-is and just dealing with the eccentricities.
Since 1.2.3 is just prior to the releases building up to 2.0 where a lot of warnings and deprecation notices were introduced, you could have quite a chore.
Some things to keep an eye out for:
Migrations are now timestamped rather than sequentially numbered, but are at least backwards compatible
Many vendor/ plugins may not work, may have no 2.x-compatible version, or may need to be upgraded
The routing engine has changed and the names of many routes may have changed, so see what rake routes says and get ready for a lot of search-and-replace
I did this for a client with a smallish site. First, version control is your friend. Make sure you have the entire codebase committed.
Next, the basic recipe is as follows:
Tag the current source
Update to the next release of Rails (you'll have to Google for the release announcement). My app was frozen, so I just had to freeze to that version (see the sketch after this recipe)
Run rake rails:update to update the config, scripts, and JS
Diff your working copy against the version in your SCM and make any changes the app needs
Update any gems/plugins as necessary
Start the app, exercise it, and test; watch for deprecation notices
When it all looks good, commit to SCM and tag
Lather, rinse, repeat
For my client's app, it was much easier than I thought.
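For the freezing step, the version pin lives in config/environment.rb; a sketch of one rung of the ladder, where 2.0.2 is just an assumed intermediate release, not a prescription:

# config/environment.rb (sketch) -- pin the next Rails release in the ladder;
# 2.0.2 is an assumed intermediate stop on the way to 2.3.5
RAILS_GEM_VERSION = '2.0.2' unless defined? RAILS_GEM_VERSION

After changing the pin, rake rails:freeze:gems re-freezes vendor/rails to that version, and rake rails:update brings the config and scripts in line with it.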
