My RSpec examples were running respectably until I migrated to a new Mac. Now the first test can take 1-3 minutes. The next time the tests are run (in a different order), that same test might take only 0.2 seconds.
rspec --profile (-p) will show me which examples are slow on a given run, but it doesn't help me debug WHY.
It might be a call to an outside service, or perhaps some external volume that's timing out because it's no longer attached to this machine, but I can't think of what or where the reference might be. Other teammates testing the same codebase don't have this problem.
Thoughts?
Edit (adding image of profiling results)
Does this help? I have no idea what cycle 5 means...
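One way to see what a stalled example is actually blocked on is a watchdog that dumps the main thread's backtrace mid-run. A minimal sketch for spec/spec_helper.rb or a support file (the 10-second threshold is an arbitrary placeholder):

    RSpec.configure do |config|
      config.around(:each) do |example|
        main_thread = Thread.current
        watchdog = Thread.new do
          sleep 10 # arbitrary threshold, tune to taste
          warn "Still running after 10s: #{example.metadata[:location]}"
          # A stalled DNS lookup, network call, or unreachable volume
          # will show up in the main thread's backtrace.
          (main_thread.backtrace || []).each { |line| warn "  #{line}" }
        end
        example.run
        watchdog.kill
      end
    end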
We have a service that picks up custom tests in XML and converts them to CodedUI tests. We then start a process for MSTest to load the tests into the Test Controller, which distributes the tests across various Agents. We run regression tests at night, so no one is around to fix a system if something goes wrong. When certain exceptions occur in the test program, an error window pops open and no more tests can run on that system. Subsequent tests are loaded into the agent and fail immediately because they cannot perform their assigned tasks. Thousands of tests that should take all night on multiple systems now fail in minutes.
We can detect that an error occurred by how quickly a test is returned, but we don't know how to disable the agent so that it won't pick up any more tests.
Addendum:
If a test has failed so badly that no subsequent test can run successfully (as noted, we may not have an action to handle some, likely new, popup), then we want to disable that agent, since every test sent to it will fail. Because we have many agents running concurrently, if one fails (and gets disabled), the load can still be distributed without a long string of failures. The remaining regression tests still have a chance to succeed (everything works) or fail meaningfully (did we miss another popup, or is this an actual regression failure?).
2000 failures in 20 seconds tells us nothing except that 1 system had a problem no one anticipated, and we've wasted a whole night of tests. 2 failures (1 natural, 1 caused by the previous failure) and 1 system down means the night's total run might be extended by an hour or two, and we have useful data on how to start the day: fix 1 test and rerun both failures.
You would need to abort the test run in that case. If you are running MSTest yourself, you would need to inject a ^C into the command-line process. But if no one is around to fix it, why does it matter that the subsequent tests fail? If it's just to see quickly which test caused the error, why not add a CodedUI check for the message box and mark the test inconclusive with Assert.Inconclusive? The causing test would stand out like a flag.
If you can detect the point at which you want to disable the agent, you can disable it by running "TestAgentConfig.exe delete", which will reset the agent to the unconfigured state.
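On the detection side, here is a rough watchdog sketch (shown in Ruby purely for illustration; recent_results and both thresholds are hypothetical placeholders): poll the incoming results, and once failures come back faster than any real test could complete, shell out to the delete command so the agent stops picking up work.

    loop do
      results = recent_results(5) # hypothetical: the last 5 results from this agent
      # Five consecutive sub-5-second failures suggests the agent is
      # burning through tests rather than running them.
      if results.size == 5 && results.all? { |r| r.failed? && r.duration < 5 }
        system('TestAgentConfig.exe', 'delete') # reset the agent to unconfigured
        break
      end
      sleep 30
    end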
I recently started working on a project whose Cucumber tests all pass. But I would say 60% of the time they fail on timeouts, or on altogether random, intermittent errors. So in roughly 1 in 4 runs everything will pass and be green.
Are there common reasons for this sort of intermittence? Should I be concerned?
Acceptance tests can be tricky most of the time.
Check the asynchronous parts of your code (long database transactions, Ajax, message queues). Set timeouts that make sense for you, for the tests, and for the build time (a long build time is not good; I think up to 10 minutes is acceptable, and beyond that you should review whether your tests are good enough).
Another problem is the browser (if you're using one): it can take a lot of time to warm up and start all the tests.
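For the browser/Ajax case, a minimal sketch assuming the suite drives the browser through Capybara (the five-second values are arbitrary):

    # features/support/env.rb
    require 'timeout'

    Capybara.default_wait_time = 5 # renamed default_max_wait_time in Capybara 2.1+

    # Hypothetical helper for steps that trigger Ajax: block until jQuery
    # reports no requests in flight, instead of sprinkling sleeps around.
    def wait_for_ajax(timeout = 5)
      Timeout.timeout(timeout) do
        sleep 0.1 until page.evaluate_script('jQuery.active').zero?
      end
    end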
I have a couple of load test scenarios composed of several web tests in Visual Studio 2008 Ultimate. By default these scenarios run concurrently - is there some way to run them sequentially, so that the first scenario runs for some duration and then the second scenario runs for the same duration?
The most straightforward approach would be to create a coded web test that yields a different set of tests after a certain time.
This approach would be a bit labour-intensive, though, in rearranging the code to do what you want.
I have to admit some curiosity as to why you would want to do this.
I have 357 tests (534 assertions) for my app (using Shoulda). The whole test suite runs in around 80 seconds. Is this time OK? I'm just curious, since this is one of my first apps where I write tests extensively. No fancy stuff in my app.
Btw: I tried to use an in-memory SQLite3 database, but the results were surprisingly worse (around 83 seconds). Any clues here?
I'm using a MacBook with 2GB of RAM and a 2GHz Intel Core Duo processor as my development machine.
I don't feel this question is Rails-specific, so I'll chime in.
The main thing about tests is that they should be fast enough for you to run them a lot (as in, all the time). Also, you may wish to split your tests into a few different sets, specifically things like 'long-running tests' and 'unit tests'.
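One sketch of that split, assuming RSpec: tag the expensive examples and exclude them from the default run.

    # spec/spec_helper.rb
    RSpec.configure do |config|
      # Skip examples tagged :slow unless the SLOW env var is set.
      config.filter_run_excluding :slow => true unless ENV['SLOW']
    end

    # In a spec file: this example only runs with `SLOW=1 rspec`.
    describe 'full reindex', :slow => true do
      it 'rebuilds the search index' do
        # long-running work here
      end
    end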
One last option to consider, if your database setup is time-consuming, would be to create your domain by restoring from a backup rather than doing a whole bunch of inserts.
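A rough sketch of that restore idea, assuming MySQL and a prebuilt dump (the db/seed.sql path and database name are hypothetical):

    def restore_test_database
      # One bulk import is typically much faster than replaying thousands
      # of individual INSERTs from fixtures.
      system('mysql -u root app_test < db/seed.sql') or raise 'restore failed'
    end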
Good luck!
You should try the method described at https://github.com/dchelimsky/rspec/wiki/spork---autospec-==-pure-bdd-joy- : using Spork to spin up a couple of processes that stay running and batch out your tests. I found it to be pretty quick.
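The gist of the Spork setup, as a minimal sketch: split spec_helper.rb so the expensive Rails boot happens once, in the long-lived server process.

    # spec/spec_helper.rb
    require 'spork'

    Spork.prefork do
      # Runs once, when the Spork server boots: the slow part.
      ENV['RAILS_ENV'] ||= 'test'
      require File.expand_path('../../config/environment', __FILE__)
      require 'rspec/rails'
    end

    Spork.each_run do
      # Runs before each batch of specs; keep this cheap.
    end

Then run spork in one terminal and rspec --drb spec in another.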
It really depends on what your tests are doing. Test code can be written efficiently or not in exactly the same way as any other code can.
One obvious optimisation in many cases is to write your test code so that everything (or as much as possible) is done in memory, rather than through many reads and writes to the database. However, you may have to change your application code to have the right interfaces to achieve this.
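A small sketch of the idea, assuming factory_girl and a :user factory: build instantiates an object without touching the database, so prefer it over create whenever the example doesn't need persistence.

    user = FactoryGirl.build(:user)  # in memory only, no INSERT issued
    user.valid?                      # validations still run without saving

    user = FactoryGirl.create(:user) # hits the database; use only when needed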
Large test suites can take some time to run.
I generally use "autospec -f" when developing; this only runs the specs that have changed since the last run, which makes it much more efficient to keep your tests running.
Of course, if you are really serious, you will run a Continuous Integration setup like Cruise Control - this will automate your build process and run in the background, checking out your latest revision and running the suite.
If you're looking to speed up the runtime of your test suite, then I'd use a test server such as this one from Roman Le Négrate.
You can experiment with preloading fixtures, but it will be harder to maintain and, IMHO, not worth its speed improvement (20% maximum, I think, but it depends).
It's known that SQLite is slower than MySQL/PostgreSQL, except for very small, tiny DBs.
As someone already said, you can put the MySQL (or other DB) datafiles on some kind of RAM disk (I use tmpfs on Linux).
PS: we have 1319 RSpec examples now, and they run in 230 seconds on a C2D 3GHz with 4GB RAM, and I think that's fine. So yours is fine too.
As opposed to in-memory SQLite, you can put a MySQL database on a RAM disk (on Windows) or on tmpfs (on Linux).
MySQL has very efficient buffering, so putting the database in memory does not help much unless you update a lot of data really often.
More significant is the way you isolate tests and prepare the data for each test.
You can use transactional fixtures: each test is wrapped in a transaction that is rolled back afterwards, so the next test starts from the initial state.
This is faster than cleaning up the database before each test.
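A minimal sketch of the setting, assuming rspec-rails:

    # spec/spec_helper.rb
    RSpec.configure do |config|
      # Wrap each example in a database transaction, rolled back afterwards.
      config.use_transactional_fixtures = true
    end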
There are situations when you want to use both transactions and explicit data erasing; here is a good article about it: http://www.3hv.co.uk/blog/2009/05/08/switching-off-transactions-for-a-single-spec-when-using-rspec/