First RSpec test incredibly slow after migration to new machine - ruby-on-rails

My RSpec examples were running at a respectable speed until I migrated to a new Mac. Now the first test of a run can take 1-3 minutes. The next time the tests are run (in a different order), that same test might take only 0.2 seconds.
rspec -p (the --profile flag) will show me which examples are slow in a given run, but it doesn't help me debug WHY.
It might be a call to an outside service, or perhaps an external volume that's timing out because it's no longer attached to this machine, but I can't think of what or where the reference might be. Other teammates testing the same codebase don't have this problem.
Thoughts?
Edit (adding image of profiling results)
Does this help? I have no idea what cycle 5 means...

Related

Browse a Rails App with the DB in Sandbox Mode?

I'm writing a lot of request specs right now, and I'm spending a lot of time building up factories. It's pretty cumbersome to make a change to a factory, run the specs, and see if I forgot about any major dependencies in my data. Over and over and over...
It makes me want to set up some sort of sandboxed environment, where I could browse the site and refresh the database from my factories at will. Has anyone ever done this?
EDIT:
I'm running spork and guard-rspec to make this easier, but I still lose a lot of time.
A large part of that time is spent waiting for Capybara/Firefox to spin up. These are request specs, and quite often there are JavaScript components that need to be exercised as well.
You might look at a couple of solutions first:
You can run specific test files rather than the whole suite with something like rspec spec/request/foo_spec.rb. You can run a specific test with the -e option, or by appending :lineno to the filename, where lineno is the line number the test starts on.
You can use something like guard-rspec, watchr, or autotest to automatically run tests when their associated files change (a minimal Guardfile sketch follows these suggestions).
Tools like spork and zeus can be used to preload the app environment so that test runs start faster. However, I don't think they reload factories, so they may not apply here.
See this answer for ways that you can improve Rails' boot time. This makes running the suite substantially less painful.
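For the guard-rspec suggestion, a minimal Guardfile sketch might look like the following; the watch patterns are assumptions for a conventional Rails layout (and newer guard-rspec versions use the guard :rspec, cmd: '...' form), so adjust them to your project:

    # Guardfile -- re-run the matching spec whenever a watched file changes
    guard 'rspec' do
      watch(%r{^spec/.+_spec\.rb$})
      watch(%r{^app/(.+)\.rb$})    { |m| "spec/#{m[1]}_spec.rb" }
      watch('spec/spec_helper.rb') { 'spec' }
    end

From the shell you can still target a single file or example directly, e.g. rspec spec/request/foo_spec.rb:42.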

Are there common reasons why Cucumber tests fail 60% of the time on otherwise passing functional code?

I recently started working on a project whose cucumber tests all pass. But I would say 60% of the time they fail on timeouts, or on altogether random, intermittent errors. So only roughly 1 in 4 runs will everything pass and be green.
Are there common reasons for this sort of intermittence? Should I be concerned?
Acceptance tests can be tricky much of the time.
You have to check the asynchronous parts of your code (long database transactions, Ajax, message queues). Set timeouts that make sense for you, for the tests and for the build time (a long build time is not good; I think 10 minutes is acceptable, and beyond that you should review whether your tests are good enough).
The other problem is the browser (if you're using one): it can take a lot of time to warm up before starting all the tests.
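If the timeouts come from Capybara-driven steps, one hedged option is to raise Capybara's wait time; this is only a sketch (the setting is default_wait_time on older Capybara versions, default_max_wait_time on 2.1 and later, and the values and selector are purely illustrative):

    # features/support/env.rb -- assumes Capybara is already loaded via capybara/cucumber
    Capybara.default_max_wait_time = 10   # seconds Capybara keeps retrying finders/matchers

    # Inside a step definition, raise the wait for one unusually slow step only:
    Capybara.using_wait_time(20) do
      page.find('#report-table')          # hypothetical selector, for illustration only
    end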

Is spork worth the hassle?

I have spent hours and hours trying to configure spork so that it works for RSpec, works for Cucumber, reloads models so that it doesn't have to be restarted all the time and doesn't throw errors.
I've spent so much time researching solutions to its quirks that I might as well just have waited for the regular tests to load. On top of all that, it has the annoying characteristic that when I'm debugging, I type commands into the terminal window I called RSpec from, but the output gets displayed in the terminal window Spork is running in. Eesh.
I'm hugely appreciative of any piece of software produced to help others, and of the Spork project, but I just can't figure out whether it's worth labouring through it further.
EDIT
YES - SPORK IS DEFINITELY WORTH THE EFFORT. After 4 days of setup I finally managed to sort out all of the issues, and it has sped up my testing incredibly. I thoroughly recommend it.
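For anyone hitting the same wall, the usual split of spec_helper.rb looks roughly like the sketch below; it assumes spork and rspec-rails, and the FactoryGirl line is an assumption about what you edit between runs:

    # spec/spec_helper.rb -- a minimal Spork layout
    require 'spork'

    Spork.prefork do
      # Loaded once, when the Spork server boots; keep the slow requires here.
      ENV['RAILS_ENV'] ||= 'test'
      require File.expand_path('../../config/environment', __FILE__)
      require 'rspec/rails'
    end

    Spork.each_run do
      # Runs before each spec invocation; reload anything you edit between runs.
      FactoryGirl.reload if defined?(FactoryGirl)   # assumption: factory_girl is in use
    end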
I found out that Spork seems to work mostly OK if you follow the TDD/BDD pattern - that is, you write your test first, let it fail, and only then write the code. However, I don't always work this way - there are many situations where I need to write the code before writing the tests.
Fortunately, I found a nearly ideal solution to my testing needs - the Spin gem. It doesn't force you into any workflow, and it just works.
Give my CoreApp a go - it's a complete config of RSpec/Spork/Guard/Cucumber.
I find it worthwhile, considering it speeds up most tests; the disadvantage is that my tests then aren't engineered to be 'efficient' themselves. Some believe it's better to wait for the environment to load each time, but on my MBP it takes 10-15 seconds for the environment to reload.
https://github.com/bsodmike/CoreApp

Autotest performance slowdown

I've been using ZenTest to run all the tests in my Rails project for years and it's always been quite nippy. However, on my Mac it has suddenly started taking 3 times as long to run all the tests. We have 1219 tests and for the past year it would run all the tests in around 300 seconds on average. Now though, it's taking almost 900 seconds:
Finished in 861.3578 seconds.
1219 tests, 8167 assertions, 0 failures, 0 errors
==============================================================================
I can't think of any reason why such a slowdown would occur. I've tried updating to the latest gem version, reducing the log output from the tests and regenerating the test database, all to no avail. Can anyone suggest a way to improve the performance?
When you have eliminated the impossible, whatever remains, however improbable, must be the explanation: if it's not the gem, not the database (did you check indexes?), not your Mac, not Rails (did you upgrade recently?), could it be the code?
I'd check git/svn/cvs logs for the few most recent changes you made, and look for anything that might e.g. be slowing down queries.
If you can't find anything right away, profile the code to see where the time is going. This will be slower than just remembering something you did change (which almost always turns out to be the explanation in this kind of situation), but might point you in the right direction.
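If you do end up profiling, a minimal sketch with the ruby-prof gem (an assumption on my part; any Ruby profiler will do) looks like this:

    # Gemfile: gem 'ruby-prof' -- then wrap the suspect code:
    require 'ruby-prof'

    RubyProf.start
    run_the_slow_tests            # hypothetical placeholder for whatever you are measuring
    result = RubyProf.stop

    printer = RubyProf::FlatPrinter.new(result)
    printer.print(STDOUT)         # flat report of where the time actually goes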
Performance issues can be frustrating because any number of factors can have an impact: a missing index on the DB, network latency, low memory conditions. Don't give up; keep Tilton's Law in mind.
You are really going to have to do a little bit more homework here; I doubt it's ZenTest:
1. Grab a version of your code from a few months ago, when things were great and dandy. Run all the tests and output all the test durations to a spreadsheet or something.
2. Grab a current version of your code base and repeat the process in 1).
If the durations are the same, something about your DB configuration or machine config has changed.
If all the tests are slower on average, this is a harder one to diagnose, but it would seem that there is a new bit of code that's running in each test.
If a handful of new tests are really slow, fix them.
So I finally solved this. Here's how in three easy steps:
1. Insert the OS X Leopard CD.
2. Completely reinstall Leopard from scratch.
3. Reinstall Ruby, MySQL, etc.
After doing this the tests run in under 260 seconds.
I have no idea what happened but it certainly seems to have been a MySQL issue somewhere.

Speed of running a test suite in Rails

I have 357 tests (534 assertions) for my app (using Shoulda). The whole test suite runs in around 80 seconds. Is this time OK? I'm just curious, since this is one of my first apps where I write tests extensively. No fancy stuff in my app.
Btw: I tried using an in-memory SQLite3 database, but the results were surprisingly worse (around 83 seconds). Any clues here?
I'm using a MacBook with 2 GB of RAM and a 2 GHz Intel Core Duo processor as my development machine.
I don't feel this question is rails specific, so I'll chime in.
The main thing about testing is that it should be fast enough for you to run them a lot (as in, all the time). Also, you may wish to split your tests into a few different sets, specifically things like 'long running tests' and 'unit tests'.
One last option to consider, if your database setup is time consuming, would be to create your domain by restoring from a backup, rather than doing a whole bunch of inserts.
Good luck!
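To illustrate the suggestion above about splitting the tests into sets, one approach (shown with RSpec purely as an example, since the question uses Shoulda) is to tag slow examples and exclude them from the default run; the tag and environment variable names here are assumptions:

    # spec/spec_helper.rb -- skip examples tagged :slow unless explicitly requested
    RSpec.configure do |config|
      config.filter_run_excluding slow: true unless ENV['RUN_SLOW']
    end

    # In a spec file:
    describe 'full checkout flow', slow: true do   # hypothetical example group
      it 'exercises the whole stack' do
        # ...
      end
    end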
You should try the method described at https://github.com/dchelimsky/rspec/wiki/spork---autospec-==-pure-bdd-joy- which uses Spork to spin up a couple of processes that stay running and batch out your tests. I found it to be pretty quick.
It really depends on what your tests are doing. Test code can be written efficiently or not in exactly the same way as any other code can.
One obvious optimisation in many cases is to write your test code in such a way that everything (or as much as possible) is done in memory, as opposed to many read/writes to the database. However, you may have to change your application code to have the right interfaces to achieve this.
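As a hedged illustration of keeping things in memory (the model and factory names are made up for the example):

    # Alternatives to FactoryGirl.create(:user) that avoid hitting the database:
    user = User.new(name: 'Alice')              # plain Ruby object, no INSERT at all
    user = FactoryGirl.build(:user)             # builds the object without saving it
    user = FactoryGirl.build_stubbed(:user)     # stubs persistence, usually the fastest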
Large test suites can take some time to run.
I generally use "autospec -f" when developing; this only runs the specs that have changed since the last run, which makes it much more efficient to keep your tests running.
Of course, if you are really serious, you will run a Continuous Integration setup like Cruise Control - this will automate your build process and run in the background, checking out your latest code and running the suite.
If you're looking to speed up the runtime of your test suite, then I'd use a test server such as this one from Roman Le Négrate.
You can experiment with preloading fixtures, but it will be harder to maintain and, IMHO, not worth its speed improvements (20% maximum, I think, but it depends).
It's known that SQLite is slower than MySQL/PostgreSQL, except for very small, tiny DBs.
As someone already said, you can put the MySQL (or other DB) datafiles on some kind of RAM disk (I use tmpfs on Linux).
PS: we have 1319 RSpec examples now, and the suite runs in 230 seconds on a 3 GHz Core 2 Duo with 4 GB of RAM, and I think that's fine. So yours is fine too.
As an alternative to in-memory SQLite, you can put a MySQL database on a RAM disk (on Windows) or on tmpfs (on Linux).
MySQL has very efficient buffering, so putting the database in memory does not help much unless you update a lot of data very often.
More significant is how you isolate tests and prepare data for each test.
You can use transactional fixtures. That means each test is wrapped in a transaction, so the next test starts from the initial state.
This is faster than cleaning up the database before each test.
There are situations when you want to use both transactions and explicit data erasing, here is a good article about it: http://www.3hv.co.uk/blog/2009/05/08/switching-off-transactions-for-a-single-spec-when-using-rspec/
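A minimal rspec-rails sketch of the transactional setup (the linked article covers switching it off for individual example groups):

    # spec/spec_helper.rb -- wrap every example in a transaction that is rolled back
    RSpec.configure do |config|
      config.use_transactional_fixtures = true
    end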
