How do I set up NCrunch to run NSpec tests - bdd

I'm struggling to set up NCrunch to run my NSpec tests automatically. The NCrunch forums say this functionality hasn't been implemented yet, but MattFlo says he prefers using NCrunch, so I'm fairly sure it can be made to work. Help would be greatly appreciated!

We are working on getting NCrunch fully supported.
For now you can use the DebuggerShim (a .cs file included with NSpec) to run your specs via NCrunch. The DebuggerShim is essentially an NUnit test that runs NSpec tests.
You may want to take a look at specwatchr. Matt likes to use NCrunch, but I find that it's too eager to run my tests. I have to consciously STOP typing to give NCrunch a chance to run them... I'd rather just hit the save button and have a background process run my tests for me (i.e. specwatchr). Hope that helps.
Amir (Hacker on NSpec)
NUnit extension for NSpec: https://github.com/ddaysoftware/NSpec4NUnit
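For anyone wanting to wire up the specwatchr route now: specwatchr is built on watchr-style scripts, so the idea looks roughly like the sketch below. The runner path and assembly name are hypothetical, and the config specwatchr actually generates for you will differ; treat this as an illustration of the save-triggered workflow, not its exact output.

watch('.*\.cs$') do |match|
  # re-run the specs whenever a .cs file is saved; swap in your own
  # test runner and spec assembly here
  system('nunit-console.exe SampleSpecs/bin/Debug/SampleSpecs.dll')
end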

Related

RSpec "retest broken tests from last time" flag

Is there any way to retest previously broken tests?
So, say, I run rspec and several tests in different files and directories fail.
I fix something, and now I have to manually specify all the files and folders I want to retest, or run the tests for the whole project again (which takes a considerable amount of time for big projects).
What I was looking for is something like a flag
rspec --prev-failed-only
I realize that such a flag would require a considerable amount of additional work from rspec, like storing the results of previous runs and so on. But I think it would be super convenient.
Is there any such (or similar) tool/gem?
The rspec-rerun gem does what you want: https://github.com/dblock/rspec-rerun
https://github.com/rspec/rspec-core/issues/456 has a good discussion on the topic of making rspec itself be able to rerun failed tests.
On the Giant Robots podcast, Sam Phippen of the core team mentioned this feature is due to be added to RSpec soon.
In case anyone else finds this: a month after the question was asked (open source <3 ), rspec 3.3 introduced the --only-failures option, along with the delightfully handy --next-failure (-n) option. See rspec --help for more info.
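Note that --only-failures requires RSpec to persist example statuses between runs, so you have to tell it where to write that file first (the path below is just a common convention):

# spec/spec_helper.rb
RSpec.configure do |config|
  # RSpec records each example's pass/fail status here so that
  # --only-failures and --next-failure know what to re-run
  config.example_status_persistence_file_path = "spec/examples.txt"
end

After one full run, rspec --only-failures re-runs just the examples that failed last time.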

How can I decrease my Rails test overhead?

I'm using Test::Unit on a large app with a large number of gem dependencies (>75). I'm trying to develop using BDD, but it takes minutes for the app to load its dependencies before it can run the tests. Is there a way to preload the dependencies and automatically re-run the tests on changes, or a similar solution?
I would look into Spork. It works wonders.
https://github.com/sporkrb/spork
https://github.com/sporkrb/spork-testunit
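Typical usage (a sketch; check the spork-testunit README for the exact commands): start the DRb server with spork testunit in one terminal, then run individual tests against the already-loaded app with testdrb test/my_test.rb.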
I am using RSpec and there's a great tool for it, called Spork. It basically loads your app once and then just reloads modified parts. If you combine it with Guard, you get "continuous testing". That is, you hit 'Save' in your editor and tests start executing, giving you instant feedback. This still amazes me after some months :)
Edit
As THEM points out, there's a plugin for Spork to support Test::Unit. You should look into it.
There was also an interesting article about test speed on the 37Signals blog a while back. Might be of interest even if you end up going with Spork or another solution.
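If you do go the Spork + Guard route, a minimal Guardfile looks something like the sketch below. It assumes the guard-spork and guard-test gems, and the watch patterns will need adjusting to your layout:

# Guardfile
# start a Spork DRb server, restarting it when core config changes
guard 'spork' do
  watch('config/application.rb')
  watch('config/environment.rb')
  watch(%r{^config/environments/.+\.rb$})
end

# run tests over DRb so the app preloaded by Spork is reused
guard 'test', :drb => true do
  watch(%r{^test/.+_test\.rb$})
  watch(%r{^lib/(.+)\.rb$}) { |m| "test/#{m[1]}_test.rb" }
end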

Is anyone actually running plugin tests/specs in their Rails applications?

We've recently upgraded our Rails application.
To be extra sure everything works, I've tried to get the tests and specs of the various plugins we use (26 at current count) to run, thinking I would then add those to our continuous integration, which currently only runs the main application's specs.
I've run into a lot of problems even getting the specs/tests to run at all, not even getting to any individual test failures. For example, I've run across this problem: http://rails_security.lighthouseapp.com/projects/15332/tickets/7-rake-spec-plugin-fails-on-rails-2-1 (thanks by the way for that ticket, even though the issue wasn't fixed).
So the question is: are we unusual in that we care about running plugin tests at all? It doesn't seem to feature much here on SO. My nagging feeling is that they should be run as much as the main specs, but you could also argue that since the main specs pass, the plugins must also work.
A lot of it depends on the plugin/gem being used.
If I know the author/community of the gem is competent, I will skip the tests, use the latest stable release, and freeze that gem. I will then track the progress of development on GitHub.
If the plugin/gem is written by an unknown party, I will run the tests, freeze the gem/plugin, and again monitor the development.
Sometimes, however, I will write my own contributions to the gem and fork the code. I will clone the repo on GitHub and base my installations on that fork, at which point any and all changes result in a complete test run.
With all things in the open source world, there is an element of trust between the creator and the users of those pieces of code. The tests themselves don't tell me much about the codebase; they show there are tests, and that's it. Do they test everything? Are there edge cases? It's this element of trust I have with certain developers in the community that means I forgo worrying about running tests for those gems.
It's a slippery slope testing everything; where does it stop? Would you test Rails itself on every release? No, you assume the community has done that for you already.

How can I run Ruby specs and/or tests in MacVim without locking up MacVim?

About 6 months ago I switched from TextMate to MacVim for all of my development work, which primarily consists of coding in Ruby, Ruby on Rails and JavaScript.
With TextMate, whenever I needed to run a spec or a test, I could just command+R on the test or spec file and another window would open and the results would be displayed with the 'pretty' format applied. If the spec or test was a lengthy one, I could just continue working with the codebase since the test/spec was running in a separate process/window. After the test ran, I could click through the results directly to the corresponding line in the spec file.
Tim Pope's excellent rails.vim plugin comes very close to emulating this behavior within the MacVim environment. Running :Rake when the current buffer is a test or spec runs the file then splits the buffer to display the results. You can navigate through the results and key through to the corresponding spot in the file.
The problem with the rails.vim approach is that it locks up the MacVim window while the test runs. This can be an issue with big apps that might have a lot of setup/teardown built into the tests. Also, the visual red/green HTML results that TextMate displays (via --format pretty, I'm assuming) are a bit easier to scan than the split window.
This guy came close about 18 months ago: http://cassiomarques.wordpress.com/2009/01/09/running-rspec-files-from-vim-showing-the-results-in-firefox/ The script he provides works with a bit of hacking, but the tests still run within MacVim and lock up the current window.
Any ideas on how to fully replicate the TextMate behavior described above in MacVim?
Thanks!
There is a plugin called vim-addon-background-cmd that lets you run tasks in the background instead of locking up the Vim interface. You would have to wire your test command to run through the background command. See the docs for more information on how to do that.
A few months back I was looking for this exact same thing. Then I discovered autotest with RSpec. Now I keep a separate terminal window open which shows my last test run. If I change any relevant code files, my tests are automatically run for me (the files are watched, and if they change, the tests run).
If you want the same autotest-type behavior in a non-Rails project, you can look at the watchr gem. Its functionality is similar to autotest, but you can use it in ANY framework.
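A watchr script is just a Ruby file of watch rules. Here is a minimal sketch for a plain Ruby project (the file name and the rspec invocation are assumptions; adjust for your runner):

# specs.watchr -- start it with: watchr specs.watchr
# re-run a spec file whenever it is saved
watch('spec/.*_spec\.rb') { |m| system("rspec #{m[0]}") }
# re-run the matching spec whenever a lib file is saved
watch('lib/(.*)\.rb') { |m| system("rspec spec/#{m[1]}_spec.rb") }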

How can I have autospec/test not run the full test suite after everything goes green?

Same question as waloeiii asked on Twitter:
How can I have autospec/test not run the full test suite after everything goes green? The next test I write will be red!
I'd rather run the full test suite manually.
By the way, I tried adding a failing spec:
it "should flunk" do
flunk
end
but autospec seems to ignore it when it feels like it.
A bit late, but I was looking for this as well, so I thought I'd post my solution:
Add the following to ~/.autotest:
class Autotest
  # overriding rerun_all_tests with a no-op stops autotest from
  # kicking off the full suite once everything goes green
  def rerun_all_tests
  end
end
Are you sure you are not confused about the intended behaviour of autotest's heuristics?
My understanding is that it runs tests for what has changed and will keep running failed tests until they pass; once they pass, it runs the whole test suite to make sure nothing else broke.
In effect, it is being conservative and making sure you haven't introduced side effects that break other, unrelated tests, which is probably a good thing. The problem, of course, is that if you are doing fast red/green cycles, you are going to be running your full suite a lot.
If you want to change these behaviours, you need to edit the heuristics in the rails_autotest.rb file for ZenTest.
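Before patching ZenTest itself, it may also be worth trying autotest's hook mechanism from ~/.autotest. A sketch (:green is one of autotest's standard hook names, but verify against your ZenTest version):

# ~/.autotest
Autotest.add_hook :green do |at|
  # everything just passed -- this is the point at which autotest
  # would normally schedule the full-suite rerun
  puts "green -- the full suite would normally rerun now"
  false # returning false lets the remaining hooks and defaults run
end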
You can use the following option to avoid this behavior:
autospec --no-full-after-failed
I think that this is by design - if you fix a failing spec, and all other specs in the section are green, then autospec will rerun the entire suite - this will tell you if the fix you applied to one area of your project has b0rked another or not.
If you just want to run the specs you are working on at any one time, then you can do it from the command line:
ruby spec/controllers/my_spec.rb
or from within TextMate by pressing Cmd+R from your spec file. You should rerun your entire suite as you go anyway; otherwise you might be missing failing specs.