Today I am going to get as far as I can setting up my testing environment and workflow. I'm looking for practical advice on how to set up the test environment from those of you who are passionate about and well versed in Ruby testing.
By the end of the day (6am PST?) I would like to be able to:
Type one command to run the test suites for ANY project I find on Github.
Run autotest for ANY Github project so I can fork and make TESTABLE contributions.
Build gems from the ground up with Autotest and Shoulda.
For one reason or another, I hardly ever run tests for projects I clone from Github. The major reason is that unless they're using RSpec and have a Rake task to run the tests, I don't see the common pattern behind it all.
I have built 3 or 4 gems writing tests with RSpec, and while I find the DSL fun, it's less than ideal because it just adds another layer/language of methods I have to learn and remember. So I'm going with Shoulda. But this isn't a question about which testing framework to choose.
So the questions are:
What is your test environment setup, as an SO reader and Github project committer, using autotest, so that whenever you git clone a gem you can run its tests and develop it with autotest if desired?
What are the guys who are writing the Paperclip Tests and Authlogic Tests doing? What is their setup?
Thanks for the insight. There are tons of resources describing how to use the different testing frameworks, but almost nothing on the actual setup and workflow. Looking for answers that will make me a more effective tester.
The most common convention is probably rake test, rake spec, or maybe even just rake.
Of course, there is no question that this will fail with many projects, in particular the ones without tests or specs.
It might be possible to parse the output of rake -T if a Rakefile is there, and act on that, but there really is no way you will cover ALL projects on GitHub.
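A rough sketch of that idea, assuming nothing about the project beyond the presence of a Rakefile (the script name and the fallback order are purely illustrative):

    #!/usr/bin/env ruby
    # try_tests.rb -- probe a cloned repo for the conventional test tasks.
    exit 1 unless File.exist?('Rakefile')

    # `rake -T` lists the defined tasks; look for the usual suspects.
    tasks = `rake -T 2>/dev/null`
    task  = %w[spec test].find { |t| tasks =~ /^rake #{t}\s/ }

    # Fall back to the default task if neither `rake spec` nor `rake test` exists.
    system(task ? "rake #{task}" : 'rake')

Even then, projects with plain test scripts, a test database that needs configuring, or no tests at all will slip through.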
So despite the fact that I've created 3 Rails apps of increasing complexity on my own, I haven't deployed any of them. I decided tonight to create another app (well, really redoing an older app with some new things I've learned), and it hit me:
I really don't completely understand the Gemfile. Well, I do, but I second-guess myself quite a bit. It seems obvious and straightforward, yet here I am on Stack Overflow asking for clarification, or at least to ease my second-guessing.
My understanding of Gemfile is as follows.
Gems in the default group persist across all environments (test, development, and production); test gems only live in the test environment, dev gems in the development environment, and production gems in the production environment.
Up to this point I've kind of just been throwing most of my gems into the default group, but I want to correct this before it becomes a habit. I like using Rails' built-in testing (I know, boo, hiss) and I use minitest-reporters, guard-minitest, and mini backtrace as my helpers. I'm told best practice would be to put them, and any gem related to testing, in the obvious test group. But I don't think I've ever set up a test db, much less used the test environment. Why wouldn't those go under the development group? Or is it that running tests uses a test environment even if you didn't explicitly create one?
Is it that running tests uses a test environment even if you didn't explicitly create one?
Yes: Rails defines a test environment out of the box (config/environments/test.rb), and running your tests uses it. And:

    $ rails console test

brings you to the test environment.
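As for where the gems go, here is a minimal Gemfile sketch along the lines described in the question (the grouping shown is one common arrangement, not the only correct one, and I am assuming "minit backtrace" refers to the mini_backtrace gem):

    # Gemfile
    source 'https://rubygems.org'

    gem 'rails'  # default group: loaded in every environment

    group :development, :test do
      # Guard runs while you develop, so it is usually kept out of production.
      gem 'guard-minitest'
    end

    group :test do
      gem 'minitest-reporters'
      gem 'mini_backtrace'  # assuming "minit backtrace" means this gem
    end

Gems in a group are only loaded in those environments (and can be skipped at install time with bundle install --without production), so test-only helpers never load in production.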
Is there any way to retest previously broken tests?
So, say, I run rspec and several tests in different files and directories fail.
I fix something, and now I either have to specify manually all the files and folders I want to retest, or run the tests for the whole project again (which takes a considerable amount of time for big projects).
What I was looking for is something like a flag:

    rspec --prev-failed-only
I realize that such a flag would require a considerable amount of additional work from rspec, like storing the results of previous runs and so on. But I think it would be super convenient.
Is there any such (or similar) tool/gem?
The rspec-rerun gem does what you want: https://github.com/dblock/rspec-rerun
https://github.com/rspec/rspec-core/issues/456 has a good discussion on the topic of making rspec itself be able to rerun failed tests.
On the Giant Robots podcast, Sam Phippen of the core team mentioned this feature is due to be added to RSpec soon.
In case anyone else finds this: a month after the question was asked (open source <3), rspec 3.3 introduced the --only-failures option, as well as the delightfully handy --next-failure (-n) option. See rspec --help for more info.
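One wrinkle worth noting: --only-failures needs RSpec to persist each example's status between runs, which you opt into in your spec helper. A minimal sketch (the file path is just a common choice):

    # spec/spec_helper.rb
    RSpec.configure do |config|
      # Records pass/fail per example so that --only-failures and
      # --next-failure know what to rerun.
      config.example_status_persistence_file_path = 'spec/examples.txt'
    end

Then:

    $ rspec                  # full run; statuses get recorded
    $ rspec --only-failures  # rerun only what failed last time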
We've recently upgraded our Rails application.
To be extra sure everything works, I've tried to get the tests and specs of the various plugins we use (26 at current count) to run, thinking I would then add those to our continuous integration, which currently only runs the main application's specs.
I've run into a lot of problems even getting the specs/tests to run at all, before reaching any individual test failures. For example, I've run across this problem: http://rails_security.lighthouseapp.com/projects/15332/tickets/7-rake-spec-plugin-fails-on-rails-2-1 (thanks, by the way, for that ticket, even though the issue wasn't fixed).
So the question is: are we unusual in that we care about running plugin tests at all? It doesn't seem to feature much here on SO. My nagging feeling is that they should be run as much as the main specs, but you could also argue that since the main specs pass, the plugins must work too.
A lot of it depends on the plugin/gem being used.
If I know the author/community of the gem is competent, I will skip the tests, simply use the latest stable release, and freeze that gem. I will then track the progress of its development on GitHub.
If the plugin/gem is written by an unknown party, I will run the tests, freeze the gem/plugin, and again monitor its development.
Sometimes, however, I will write my own contributions to the gem and fork the code. I will clone the repo on GitHub and base my installations on that fork. At that point, any and all changes result in a complete test run.
With all things in the open source world, there is an element of trust between the creator and the users of those pieces of code. The tests themselves don't tell me much about the codebase; they show that there are tests, and that's it. Do they test everything? Are there edge cases? It's this element of trust I have with certain developers in the community that lets me forgo worrying about running tests for those gems.
Testing everything is a slippery slope; where does it stop? Would you test Rails on every release? No, you assume the community has done this for you already.
Up until now I've been deploying Rails apps to our Apache/Passenger setup using a simple Rake task that I wrote. I haven't tried to mess around with Capistrano or Vlad the Deployer.
However, now that more developers are coming on board, I'm interested in arranging things so that the deployment process runs the tests first and won't deploy unless they all pass. So I'm revisiting the question.
It's been a while since I looked into this. What are most people doing these days? Still using Capistrano? Writing individual Rake tasks? Something else?
Capistrano is still the standard for typical Rails deployments, yes.
We're using Capistrano, with Integrity as a CI server. Integrity is quite easy to hack on, and you could easily set it up to deploy automatically when all tests pass; I'd recommend both as good tools, and Integrity has plenty of plugins available. We currently have Integrity report each build's pass/fail status and code coverage percentage to an IRC channel, and we deploy manually.
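If you do want Capistrano itself to refuse to deploy on a red suite, here is a rough Capistrano 2-style sketch (the task name is illustrative, and the suite runs on the machine performing the deploy):

    # config/deploy.rb
    namespace :deploy do
      desc 'Run the test suite and abort the deploy on failure'
      task :run_tests do
        # `system` runs the command locally; it returns false on a non-zero exit.
        abort 'Tests failed, aborting deploy.' unless system('rake test')
      end
    end

    before 'deploy', 'deploy:run_tests'

A CI-gated deploy (Integrity triggering Capistrano on a green build) gets you the same guarantee without every developer waiting on the suite locally.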
What tools do you use for automated code sanity checks and adhering to the coding conventions in your Ruby apps? How do you incorporate them into your process? (I mean tools like roodi, reek, heckle, rcov, dcov, etc.)
I'd suggest taking a look at RuboCop. It is a Ruby code style checker based on the Ruby Style Guide. It's maintained pretty actively and it's based on standard Ruby tooling (like the ripper library). It works well with Ruby 1.9 and 2.0 and has great Emacs integration.
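Getting started is just a gem install and a run; the --auto-gen-config flag is handy when adopting it on an existing codebase, since it snapshots current offenses into .rubocop_todo.yml so you can fix them incrementally:

    $ gem install rubocop
    $ rubocop                    # lint the current directory
    $ rubocop --auto-gen-config  # record existing offenses in .rubocop_todo.yml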
The metric_fu gem might be perfect for what you need. From its README: "Metric-fu is a set of rake tasks that make it easy to generate metrics reports. It uses Saikuro, Flog, Rcov, and Rails' built-in stats task to create a series of reports. It's designed to integrate easily with CruiseControl.rb by placing files in the Custom Build Artifacts folder." Since they converted it to a gem, it works with non-Rails applications as well. I'll bet you could add hooks for other tools as well.
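Setup is minimal: per its README, requiring it from your Rakefile is enough to pick up the metrics tasks. A sketch (assuming the gem version of metric_fu from around that time):

    # Rakefile
    require 'metric_fu'  # defines the metrics:* rake tasks

After that, rake metrics:all generates the full set of reports.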
There was some good discussion on this topic on the On-Ruby blog recently. For my personal development process, I build quality tools into my test suite, but run them only after all other tests have passed. So I have a top-level rake task that looks something like this:
    desc 'Runs all unit tests, acceptance tests and quality checks'
    task 'test' => ['test:spec', 'test:features', 'test:quality']
I allow myself to commit if the last suite "fails", but I do try to get them to zero at least once each day.