We've recently upgraded our Rails application.
To be extra sure everything works, I've tried to get the tests and specs of the various plugins we use (26 at the current count) running, with the idea of then adding them to our continuous integration, which currently only runs the main application's specs.
I've run into a lot of problems even getting the specs/tests to run at all, before even reaching any individual test failures. For example, I've run across this problem: http://rails_security.lighthouseapp.com/projects/15332/tickets/7-rake-spec-plugin-fails-on-rails-2-1 (thanks, by the way, for that ticket, even though the issue wasn't fixed).
So the question is: are we unusual in that we've ever cared about running plugin tests? It doesn't seem to feature much here on SO. My nagging feeling is that they should be run just as much as the main specs, but you could also argue that since the main specs work, the plugins must also work.
A lot of it depends on the plugin/gem being used.

If I know the author/community of the gem is competent, I will skip the tests, simply use the latest stable release, and freeze that gem. I will then track its ongoing development on GitHub.

If the plugin/gem is written by an unknown party, I will run the tests, freeze the gem/plugin, and again monitor its development.
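In the Rails 2.x era, that freezing/pinning looks something like this (the gem name and version below are illustrative placeholders, not from any particular app):

    # config/environment.rb -- pin a known-good release (Rails 2.x style).
    # 'will_paginate' and the version number are illustrative placeholders.
    Rails::Initializer.run do |config|
      config.gem 'will_paginate', :version => '2.3.11'
    end

After that, rake gems:unpack vendors the gem into vendor/gems, so the app isn't at the mercy of whatever happens to be installed system-wide.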
Sometimes, however, I will write my own contributions to the gem and fork the code. I will clone the repo on GitHub and base my installations on that fork, at which point any and all changes result in a complete test run.

As with all things in the open source world, there is an element of trust between the creator and the users of those pieces of code. The tests themselves don't tell me much about the codebase; they show that tests exist, and that's it. Do they test everything? Are there edge cases? It's this element of trust I have with certain developers in the community that lets me forgo worrying about running tests for those gems.

Testing everything is a slippery slope: where does it stop? Would you test Rails itself on every release? No, you assume the community has already done that for you.
I'm a PHP dev mostly and just starting out with Ruby, so please correct me if I say something dumb.
I'm working on fixing some bugs in a "legacy" app written in Rails. The app itself has never been unit-tested. I can see the test scaffolding generated by Rails, but actual tests are nowhere to be found.

The app is pretty big and the code quality is very bad, so I wanted to write some unit tests for the functionality I'll be fixing before writing any code.

The problem is that when I run the rake test command, the testing DB is created, but any test I write keeps crashing. There are several problems with some relations and keys, which I tried to fix, but more problems just keep appearing. I understand that the DB is created from the schema.rb file, but I'm fairly sure it is simply outdated by now. That is another issue I may fix eventually, but for now I just want to write some basic unit tests that don't even use the DB itself.

So the question is: is it possible to write plain unit tests for some methods without invoking all the test DB setup? I'm aware that this is maybe not best practice, but I will feel better modifying the app with some test coverage, and starting by fixing a DB I do not yet understand seems like a bad idea to me.
I'm using Ruby 2.1.10 and the app is written in Rails 4.0.4 - these seem to be the latest versions I've managed to run the app on.
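To make it concrete, what I'd like is to run something like the following without loading the Rails environment at all (PriceCalculator is just a hypothetical stand-in for one of the app's plain-Ruby classes):

    # test/unit/price_calculator_test.rb
    # Deliberately does NOT require test_helper.rb, so Rails and the test DB
    # are never loaded. PriceCalculator is a hypothetical example class.
    require 'minitest/autorun'
    require_relative '../../app/models/price_calculator'

    class PriceCalculatorTest < Minitest::Test
      def test_applies_discount
        assert_equal 90, PriceCalculator.new(100).with_discount(10)
      end
    end

and then execute it directly with ruby test/unit/price_calculator_test.rb rather than going through rake test.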
In my Rails application, I have 2000 lines of Cucumber features.

Right now I run all of the features at once with the command rake rcov:features to get a coverage report.

I've observed that when running them all at once, the run hangs on some of the features and, because of this, never generates the coverage report.

What could be causing the features to hang?
I've seen this happen when the code depended on Modernizr and it was removed. I have also seen it happen when an incompatible/unbuildable server is specified in the Gemfile (in one case, a broken build of thin on Windows). I have also seen machines that had issues using Selenium but none using capybara-webkit, and vice versa. Basically, there are about a million things that can go wrong; it seems to me that Rails testing in general would benefit from additional polish and improved interaction.

I wonder if you would have an easier time starting small instead of trying to find out all at once exactly where in the 2000 lines the problem is. Perhaps it would be easier to remove all but a little bit of the code and add it back in slowly until something fails. You could do the same thing using your git repo, if the suite has worked in the past. Break it up into a smaller, simpler, and more digestible project.
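If you want to automate that narrowing down, a throwaway script along these lines (the glob and the five-minute timeout are assumptions, not anything from your setup) runs each feature file in its own process and flags any that never finish:

    # find_hanging_feature.rb -- a rough sketch; adjust the glob and timeout.
    require 'timeout'

    Dir['features/**/*.feature'].sort.each do |feature|
      puts "Running #{feature}..."
      pid = spawn("bundle exec cucumber #{feature}")
      begin
        # Give each feature five minutes before declaring it hung.
        Timeout.timeout(300) { Process.wait(pid) }
      rescue Timeout::Error
        Process.kill('KILL', pid)
        Process.wait(pid)  # reap the killed process
        puts ">> #{feature} appears to hang"
      end
    end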
For testing in Ruby/Rails I use RSpec, and I have well over 1000 tests already written in my application. Once the app goes live, I want to have the tests run once a day on the production server (since all the code is there) and see if anything fails. The tests will be run against test data, not production data. This is an effort to ensure that any new code added in the future does not unknowingly break anything.
I have found a few solutions:
Travis-CI (only open-source ... not suitable for closed-source projects)
Jenkins-CI (not sure if it works with, or works well with, Ruby/RSpec/Rails)
Watir (not sure if Ruby/RSpec works with it, but the tool itself is written in Ruby).
Preferably something that checks the codebase daily and then emails me when something isn't working.
I also plan on integrating JavaScript testing with a testing library of my choice (I just need the automation platform for testing it).
Can someone provide me some insight as to which tool to use? Or does anyone have any other tools to recommend?
Jenkins-CI works great with RSpec, and it can run your Jasmine JavaScript tests and your JavaScript-enabled Cucumber tests as well.
The only thing I'd recommend is not to test on your production server itself. When you push changes to your source-control repository, Jenkins will download the new code and run your tests there. When you're green (tests pass), push the code to production.
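If it helps, the Jenkins build step can be a single rake task that it invokes on every push; the task name and formatter options below are just one plausible way to wire it up, not a required convention:

    # lib/tasks/ci.rake -- an illustrative sketch of a CI entry point.
    namespace :ci do
      desc 'Prepare the test database and run the full RSpec suite'
      task :build do
        sh 'bundle exec rake db:test:prepare'
        sh 'bundle exec rspec --format progress --format html --out tmp/rspec.html'
      end
    end

Jenkins then runs bundle exec rake ci:build; sh raises on any non-zero exit status, which fails the build, and Jenkins' mail notifier covers the 'email me when something breaks' requirement.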
I'm using Test::Unit on a large app with a large number of gem dependencies (>75). I'm trying to develop using BDD, but it takes minutes for the app to load its dependencies before it can run the tests. Is there a way to preload the dependencies and just auto-run the tests on changes, or a similar solution?
I would look into Spork. It works wonders.
https://github.com/sporkrb/spork
https://github.com/sporkrb/spork-testunit
I am using RSpec and there's a great tool for it, called Spork. It basically loads your app once and then just reloads modified parts. If you combine it with Guard, you get "continuous testing". That is, you hit 'Save' in your editor and tests start executing, giving you instant feedback. This still amazes me after some months :)
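For reference, a minimal Guardfile for that combination looks roughly like this (assuming the guard-spork and guard-rspec gems; the watch patterns will vary per app):

    # Guardfile -- a sketch; requires the guard-spork and guard-rspec gems.
    guard 'spork' do
      watch('config/application.rb')
      watch('config/environment.rb')
      watch('spec/spec_helper.rb')
    end

    # --drb sends each run to the already-booted Spork server
    # instead of loading Rails from scratch every time.
    guard 'rspec', :cli => '--drb' do
      watch(%r{^spec/.+_spec\.rb$})
      watch(%r{^app/(.+)\.rb$}) { |m| "spec/#{m[1]}_spec.rb" }
    end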
Edit
As #THEM points out, there's a plugin for Spork to support TestUnit. You should look into it.
There was also an interesting article about test speed on the 37Signals blog a while back. Might be of interest even if you end up going with Spork or another solution.
The specs for my Rails project have been really slow lately. I did a git bisect to see if I could determine what has been slowing them down, and I found that certain commits that previously ran just fine are now just as slow as the current HEAD.
This leads me to believe that my problem is being caused by a gem updating or something else that's not under my source control. The problem still occurs on other dev machines so I don't think it's my personal environment either.
What's the best way to track down my slowest tests and then figure out what's slowing them down so much?
This flag will tell you which tests are the bottlenecks:
$ rspec --profile
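If you want that on every run, the equivalent setting can live in your spec helper (the example count is arbitrary):

    # spec/spec_helper.rb
    RSpec.configure do |config|
      config.profile_examples = 10  # always report the 10 slowest examples
    end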
Check out the test-prof gem:
https://test-prof.evilmartians.io/#/?id=recipes
https://github.com/palkan/test-prof
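Getting started is just a Gemfile entry; the individual profilers are then switched on with environment variables (the EVENT_PROF example is taken from its docs):

    # Gemfile -- add test-prof to the test group
    group :test do
      gem 'test-prof'
    end

    # Then run, for example:
    #   EVENT_PROF='sql.active_record' bundle exec rspec
    # to see where time goes in ActiveRecord SQL across the suite.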