I have spent hours and hours trying to configure spork so that it works for RSpec, works for Cucumber, reloads models so that it doesn't have to be restarted all the time and doesn't throw errors.
I've spent so much time researching solutions to its quirks that I might as well have just waited for the regular tests to load. On top of all that, it has the annoying characteristic that when I'm debugging, I type commands into the terminal window I ran RSpec from, but the output gets displayed in the terminal window Spork is running in. Eesh.
I'm hugely appreciative of any piece of software produced to help others, the Spork project included, but I just can't figure out whether it's worth labouring through further.
EDIT
YES - SPORK IS DEFINITELY WORTH THE EFFORT. After 4 days of setup I finally managed to sort out all of the issues, and it has sped up my testing incredibly. I thoroughly recommend it.
I found out that Spork seems to work mostly OK if you follow the TDD/BDD pattern - that is, you write your test first, let it fail, and only then write the code. However, I don't always work this way - there are many situations where I need to write the code before writing the tests.
Fortunately, I found a nearly ideal solution to my testing needs - the Spin gem. It doesn't force you into any particular workflow, and it just works.
Give my CoreApp a go - it's a complete config of RSpec/Spork/Guard/Cucumber.
I find it worthwhile considering it speeds up most tests, but the disadvantage is that my tests aren't engineered to be 'efficient' themselves. Some believe it's better to wait for the environment to load each time, but on my MBP it takes 10-15 seconds for the environment to reload.
https://github.com/bsodmike/CoreApp
My RSpec examples were running respectably fast until I migrated to a new Mac. Now, it seems the first test will take 1-3 minutes. The next time the tests are run (in a different order) that same test might take only 0.2 seconds.
rspec -p will show me which examples are slow that round but it doesn't help me debug WHY.
It might be a call to an outside service, or perhaps some external volume that's timing out because it's no longer attached to this machine, but I can't think of what or where the reference might be... other teammates testing the same codebase don't have this problem.
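One way I'm considering flushing out a hidden call to an outside service (assuming I can add the webmock gem to the test group) is blocking all real HTTP during the specs, so any external request raises immediately instead of timing out:

```ruby
# spec_helper.rb -- block real network access during specs so a
# hidden call to an outside service fails loudly rather than hanging.
require 'webmock/rspec'
WebMock.disable_net_connect!(allow_localhost: true)
```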
Thoughts?
Edit (adding image of profiling results)
Does this help? I have no idea what cycle 5 means...
I'm writing a lot of request specs right now, and I'm spending a lot of time building up factories. It's pretty cumbersome to make a change to a factory, run the specs, and see if I forgot about any major dependencies in my data. Over and over and over...
It makes me want to set up some sort of sandboxed environment, where I could browse the site and refresh the database from my factories at will. Has anyone ever done this?
EDIT:
I'm running spork and rspec-guard to make this easier, but I still lose a lot of time.
A large part of that time is spent waiting for Capybara/Firefox to spin up. These are request specs, and quite often there are JavaScript components that need to be exercised as well.
You might look at a couple of solutions first:
You can run specific test files rather than the whole suite with something like rspec spec/request/foo_spec.rb. You can run a specific test with the -e option, or by appending :lineno to the filename, where lineno is the line number the test starts on.
You can use something like guard-rspec, watchr, or autotest to automatically run tests when their associated files change.
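As a minimal sketch of the guard-rspec option, a Guardfile like the following re-runs a spec whenever it, or the app file it covers, changes (the paths assume a conventional Rails layout and may need adjusting):

```ruby
# Guardfile -- minimal guard-rspec setup: map changed files to the
# spec(s) that should be re-run.
guard 'rspec' do
  watch(%r{^spec/.+_spec\.rb$})                           # a spec changed: run it
  watch(%r{^app/(.+)\.rb$}) { |m| "spec/#{m[1]}_spec.rb" } # app file changed: run its spec
  watch('spec/spec_helper.rb') { 'spec' }                  # helper changed: run everything
end
```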
Tools like spork and zeus can be used to preload the app environment so that test suite runs take less time. However, I don't think this reloads factories, so it may not apply here.
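That said, with Spork you can usually work around the reloading problem yourself: anything required in the prefork block is cached, so a sketch like this (assuming factory_girl, which provides FactoryGirl.reload) picks up factory edits on each run:

```ruby
# spec_helper.rb -- Spork caches whatever is loaded in prefork,
# so reload factory definitions in each_run to pick up edits.
require 'spork'

Spork.prefork do
  # heavy, rarely-changing requires go here (Rails, rspec, etc.)
end

Spork.each_run do
  FactoryGirl.reload if defined?(FactoryGirl)
end
```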
See this answer for ways that you can improve Rails' boot time. This makes running the suite substantially less painful.
I recently started working on a project with a full suite of passing Cucumber tests. But I would say 60% of the time they fail on timeouts, or on altogether random, intermittent errors. So roughly 1 in 4 runs, everything will pass and be green.
Are there common reasons for this sort of intermittence? Should I be concerned?
Acceptance tests can be tricky most of the time.
You've got to check the asynchronous parts of your code (long database transactions, Ajax, message queues). Set timeouts that make sense for you, for the tests, and for the build time (a long build time isn't great - I think 10 minutes is acceptable; beyond that, review whether your tests are good enough).
The other problem is the browser (if you're using one): it can take a lot of time to warm up before starting all the tests.
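For the Ajax-driven timeouts specifically, one knob worth trying is Capybara's implicit wait (a sketch; in older Capybara versions the setting is called default_wait_time, later renamed default_max_wait_time):

```ruby
# spec_helper.rb -- how long Capybara keeps retrying finders before
# declaring a timeout; the old default of 2 seconds is often too
# short for slow Ajax in acceptance tests.
Capybara.default_wait_time = 10
```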
I am looking for a system to parallelise a large suite of tests in a Ruby on Rails app (using RSpec and Cucumber) that works under JRuby. Cucumber is actually not too bad, but the full RSpec suite currently takes nearly 20 minutes to run.
The systems I can find (hydra, parallel-test) look like they use forking, which isn't the ideal solution for the JRuby environment.
We don't have a good answer for this kind of application right now. Just recently I worked on a fork of spork that allows you to keep a process running and re-run specs or features in it, provided you're using an app framework that supports code reloading (like Rails). Take a look at the jrubyhub application for an example of how I use Spork.
You might be able to spawn a spork instance for your application and then send multiple, threaded requests to it to run different specs. But then you're relying on RSpec internals to be thread-safe, and unfortunately I'm pretty sure they're not.
Maybe you could take my code as a starting point and build a cluster of spork instances, and then have a client that can distribute your test suite across them. It's not going to save memory and will still take a long time to start up, but if you start them all once and just re-use them for repeated runs, you might make some gains in efficiency.
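As a toy sketch of the distribution piece only (the distribute helper and the round-robin policy below are purely illustrative - nothing like this exists in spork itself):

```ruby
# Hypothetical helper: deal a list of spec files round-robin across
# N spork workers, so each worker gets a roughly equal share.
def distribute(files, worker_count)
  buckets = Array.new(worker_count) { [] }
  files.each_with_index do |file, i|
    buckets[i % worker_count] << file
  end
  buckets
end

specs  = %w[a_spec.rb b_spec.rb c_spec.rb d_spec.rb e_spec.rb]
groups = distribute(specs, 2)
# groups[0] => ["a_spec.rb", "c_spec.rb", "e_spec.rb"]
# groups[1] => ["b_spec.rb", "d_spec.rb"]
```

A real client would then send each bucket to a different spork instance and collect the results.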
Feel free to stop by user@jruby.codehaus.org or #jruby on freenode for more ideas.
I've been using ZenTest to run all the tests in my Rails project for years and it's always been quite nippy. However, on my Mac it has suddenly started taking 3 times as long to run all the tests. We have 1219 tests and for the past year it would run all the tests in around 300 seconds on average. Now though, it's taking almost 900 seconds:
Finished in 861.3578 seconds.
1219 tests, 8167 assertions, 0 failures, 0 errors
==============================================================================
I can't think of any reason why such a slowdown would occur. I've tried updating to the latest gem version, reducing the log output from the tests and regenerating the test database, all to no avail. Can anyone suggest a way to improve the performance?
When you have eliminated the impossible, whatever remains, however improbable, must be the explanation: if it's not the gem, not the database (did you check indexes?), not your Mac, and not Rails (did you upgrade recently?), could it be the code?
I'd check git/svn/cvs logs for the few most recent changes you made, and look for anything that might e.g. be slowing down queries.
If you can't find anything right away, profile the code to see where the time is going. This will be slower than just remembering something you did change (which almost always turns out to be the explanation in this kind of situation), but might point you in the right direction.
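For example, Ruby's stdlib Benchmark makes it easy to time a suspect snippet in isolation before reaching for a full profiler (the workload below is just a stand-in for whatever you suspect - a query, a fixture load, etc.):

```ruby
require 'benchmark'

# Time one suspect piece of code in isolation; swap the block body
# for the query or setup step you think is slow.
elapsed = Benchmark.realtime do
  100_000.times { |i| i * i }  # stand-in workload
end
puts format("took %.4f seconds", elapsed)
```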
Performance issues can be frustrating because any number of factors can have an impact. A missing index on the DB. Network latency. Low memory conditions. Don't give up, keep Tilton's Law in mind.
You're really going to have to do a little more homework here; I doubt it's ZenTest:
1. Grab a version of your code from when stuff was great and dandy a few months ago. Run all the tests and output all the test durations to a spreadsheet or something.
2. Grab a current version of your code base and repeat the process in 1).
3. If durations are the same, something about your DB configuration or machine config has changed.
4. If all the tests are slower on average, this is a hard one to diagnose, but it would seem that there is a new bit of code that's running in each test.
5. If a handful of new tests are really slow, fix them.
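If you do end up with two sets of durations, a few lines of Ruby can flag the regressions for you (the test names and timings here are made up for illustration):

```ruby
# Compare two timing runs (test name => seconds) and flag tests
# that got more than 2x slower. Data is illustrative only.
old_run = { "user_login_test" => 0.4, "cart_test" => 1.2 }
new_run = { "user_login_test" => 0.5, "cart_test" => 6.0 }

regressions = new_run.select do |name, secs|
  old = old_run[name]
  old && secs > old * 2
end

regressions.each do |name, secs|
  puts "#{name}: #{old_run[name]}s -> #{secs}s"
end
```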
So I finally solved this. Here's how in three easy steps:
1. Insert the OS X Leopard CD
2. Completely reinstall Leopard from scratch
3. Reinstall Ruby, MySQL, etc.
After doing this the tests run in under 260 seconds.
I have no idea what happened but it certainly seems to have been a MySQL issue somewhere.