Automated testing with Ruby on Rails - best practices

Curious, what are you folks doing as far as automating your unit tests with Ruby on Rails? Do you create a script that runs a rake task in cron and have it mail you the results? A pre-commit hook in git? Just manual invocation? I understand the tests themselves completely, but I'm wondering what the best practices are for catching errors before they happen. Let's take for granted that the tests are flawless and work as they should. What's the next step to make sure they reach you with the potentially detrimental results at the right time?

Not sure exactly what you want to hear, but there are a couple of levels of automated codebase control:
While working on a feature, you can use something like autotest to get instant feedback on what's working and what's not.
To make sure that your commits really don't break anything, use a continuous integration server like cruisecontrol.rb or Integrity (you can bind these to post-commit hooks in your SCM system).
Use some kind of exception notification system to catch all the unexpected errors that might pop up in production.
To get a more general view of what happened (what the user was doing when the exception occurred), you can use something like Rackamole.
Hope that helps.
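Since the question also mentions a pre-commit hook in git, here is a minimal sketch of one that runs the suite and blocks the commit when it fails. The rake task name and the use of Bundler are assumptions about your project; substitute whatever actually runs your tests.

```ruby
#!/usr/bin/env ruby
# .git/hooks/pre-commit -- a minimal sketch; make it executable with
# `chmod +x .git/hooks/pre-commit`. Assumes `bundle exec rake test`
# runs your suite; substitute `rspec` or whatever you actually use.
puts 'Running tests before commit...'
unless system('bundle exec rake test')
  puts 'Tests failed -- commit aborted (use `git commit --no-verify` to skip).'
  exit 1
end
```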

If you are developing with a team, the best practice is to set up a continuous integration server. To start, you can run this on any developer's machine, but in general it's nice to have a dedicated box so that it's always up, is fast, and doesn't disturb a developer. You can usually start out with someone's old desktop, but at some point you may want it to be one of the faster machines so that you get immediate response from tests.
I've used CruiseControl, Bamboo and TeamCity, and they all work fine. In general, the less you pay, the more time you'll spend setting it up. I got lucky and did a full Bamboo setup in less than an hour (once) -- expect to spend at least a couple of hours the first time through.
Most of these tools will notify you in some way. The baseline is email, but many offer IM, IRC, RSS or SMS (among others).
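For the cron-plus-email idea from the question, the CI server usually handles the notification for you. As a rough sketch of what that looks like with CruiseControl.rb (the exact option names can differ between versions, so treat this as illustrative only):

```ruby
# cruise_config.rb -- a rough sketch for CruiseControl.rb; the exact option
# names can differ between versions, so treat this as illustrative only.
Project.configure do |project|
  # check the repository for new commits every few minutes
  project.scheduler.polling_interval = 5.minutes

  # email the team when a build breaks (or is fixed again)
  project.email_notifier.emails = ['dev-team@example.com']
  project.email_notifier.from   = 'ci@example.com'
end
```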

Related

Automate testing of process flows in Rails

I am building an educational service, and it is a rather process-heavy application. I have a lot of user actions that trigger various follow-on actions, some immediate, some in the future.
For example, when a student completes a lesson for his day, the following should happen:
updating progress count for his user-module record
checking if he has completed a particular module and progressing him to the next one (which in turn triggers more actions)
triggering immediate emails to other users
triggering FUTURE emails to himself (ongoing lesson plans)
creating a range of other objects (grading to-dos for teachers)
any other special case events
All these triggers are built into the observers of various objects, and the execution delayed using Sidekiq.
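For concreteness, the pattern being described looks roughly like this; the class names are placeholders and the observer base class depends on the Mongoid/Rails version in use:

```ruby
# A sketch of the pattern described above; names are placeholders.
class LessonCompletionObserver < Mongoid::Observer
  def after_create(completion)
    # enqueue the immediate work...
    ProgressUpdateWorker.perform_async(completion.id)
    # ...and schedule the future email via Sidekiq's delayed enqueue
    LessonReminderWorker.perform_in(3.days, completion.student_id)
  end
end
```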
What is killing me is the testing, and the paranoia that I might break something whenever I push a change. In the past, I did a lot of assertion and validation checks, and they were sufficient. For this project, I think this is not enough, given the elevated complexity.
As such, I would like to implement a testing framework, but after reading through the various options (RSpec, Cucumber), it is not immediately clear what I should be investing my effort into, given my rather specific needs, especially for the observers and scheduled events.
Any advice and tips on what approach and framework would be the most appropriate? Would probably save my ass in the very near future ;)
Not that it matters, but I am using Rails 3.2 / Mongoid. Happy to upgrade if it works.
Testing can be a very subjective topic, with different approaches depending on the problems at hand.
I would say that given your primary need for testing end-to-end processes (often referred to as acceptance testing), you should definitely check out something like Cucumber or Steak. Both allow you to drive a headless browser and run through your processes. This kind of testing will catch any big showstoppers and allow you to modify the system and be notified of breaks caused by your changes.
Unit testing, although very important and something that should always be used in parallel with acceptance tests, isn't for end-to-end testing; it's primarily for testing the output of specific methods in isolation.
A common pattern to use is called Test Driven Development (TDD). In this, you write your acceptance tests first, in the "outer" test loop, and then code your app with unit tests as part of the "inner" test loop. The idea is that when you've finished the code in the inner loop, the outer loop should also pass, and you should have built up enough test coverage to have confidence that any future changes to the code will pass or fail the tests depending on whether the original requirements are still met.
And lastly, a test suite is something that should grow and change as your app does. You may find that whole sections of your test suite can (and maybe should) be rewritten depending on how the requirements of the system change.
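As an illustration of the acceptance-test style, an RSpec/Capybara feature covering the lesson-completion flow might look roughly like this. The factory, route helper, button text and page content are all assumptions about your app, so treat it as a sketch rather than a drop-in spec:

```ruby
# spec/features/lesson_completion_spec.rb -- a sketch only.
require 'spec_helper'

feature 'Completing a lesson', js: true do
  scenario 'updates progress and unlocks the next module' do
    student = FactoryGirl.create(:student_with_current_module)

    sign_in_as(student)                 # hypothetical auth helper
    visit current_lesson_path(student)  # hypothetical route
    click_button 'Complete lesson'

    expect(page).to have_content('Lesson complete')
    expect(student.reload.current_module.progress_count).to eq(1)
  end
end
```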
Unit testing is a must. You can use RSpec or Test::Unit for that. It will give you at least 80% confidence.
Enable "render views" for controller specs. You will catch syntax errors and simple logical errors faster that way. There are ways to test Sidekiq jobs. Have a look at this.
Once you are confident that you have enough unit tests, you can start looking into using Cucumber/Capybara or RSpec/Capybara for feature testing.
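For the Sidekiq side specifically, the sidekiq/testing helpers let you assert that your observers enqueue the jobs you expect without actually running them. A minimal sketch, where the worker and factory names are placeholders for whatever your observers actually enqueue:

```ruby
# spec/models/lesson_completion_spec.rb -- a sketch only.
require 'spec_helper'
require 'sidekiq/testing'

describe 'completing a lesson' do
  before { Sidekiq::Testing.fake! }   # jobs go into an in-memory array

  it 'schedules the follow-up email' do
    expect {
      FactoryGirl.create(:lesson_completion)
    }.to change(LessonReminderWorker.jobs, :size).by(1)
  end

  it 'performs the queued jobs' do
    FactoryGirl.create(:lesson_completion)
    LessonReminderWorker.drain   # executes every queued job inline
  end
end
```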

Browse a Rails App with the DB in Sandbox Mode?

I'm writing a lot of request specs right now, and I'm spending a lot of time building up factories. It's pretty cumbersome to make a change to a factory, run the specs, and see if I forgot about any major dependencies in my data. Over and over and over...
It makes me want to set up some sort of sandboxed environment, where I could browse the site and refresh the database from my factories at will. Has anyone ever done this?
EDIT:
I'm running spork and rspec-guard to make this easier, but I still lose a lot of time.
A large part of that time is spent waiting for Capybara/Firefox to spin up. These are request specs, and quite often there are some JavaScript components that need to be exercised as well.
You might look at a couple of solutions first:
You can run specific test files rather than the whole suite with something like rspec spec/request/foo_spec.rb. You can run a specific test with the -e option, or by appending :lineno to the filename, where lineno is the line number the test starts on.
You can use something like guard-rspec, watchr, or autotest to automatically run tests when their associated files change (see the Guardfile sketch after this list).
Tools like spork and zeus can be used to preload the app environment so that test suite runs take less time to start. However, I don't think this will reload factories, so they may not apply here.
See this answer for ways that you can improve Rails' boot time. This makes running the suite substantially less painful.
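Here is a minimal Guardfile sketch for the guard-rspec option mentioned above; the watch patterns are just the common defaults and may need adjusting for your layout:

```ruby
# Guardfile -- a minimal sketch for guard-rspec; run it with `bundle exec guard`.
guard :rspec do
  # re-run a spec when it changes
  watch(%r{^spec/.+_spec\.rb$})
  # re-run the matching spec when application code changes
  watch(%r{^app/(.+)\.rb$}) { |m| "spec/#{m[1]}_spec.rb" }
  # re-run everything when the spec helper changes
  watch('spec/spec_helper.rb') { 'spec' }
end
```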

JRuby-friendly method for parallel-testing Rails app

I am looking for a system to parallelise a large suite of tests in a Ruby on Rails app (using RSpec and Cucumber) that works with JRuby. Cucumber is actually not too bad, but the full RSpec suite currently takes nearly 20 minutes to run.
The systems I can find (hydra, parallel-test) look like they use forking, which isn't the ideal solution for the JRuby environment.
We don't have a good answer for this kind of application right now. Just recently I worked on a fork of spork that allows you to keep a process running and re-run specs or features in it, provided you're using an app framework that supports code reloading (like Rails). Take a look at the jrubyhub application for an example of how I use Spork.
You might be able to spawn a spork instance for your application and then send multiple, threaded requests to it to run different specs. But then you're relying on RSpec internals to be thread-safe, and unfortunately I'm pretty sure they're not.
Maybe you could take my code as a starting point and build a cluster of spork instances, and then have a client that can distribute your test suite across them. It's not going to save memory and will still take a long time to start up, but if you start them all once and just re-use them for repeated runs, you might make some gains in efficiency.
Feel free to stop by user@jruby.codehaus.org or #jruby on freenode for more ideas.
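To make the cluster idea above a little more concrete, here is a rough sketch. It assumes a spork/DRb server is already listening on each port and simply fans the spec files out to them through lightweight rspec DRb clients; the ports and file layout are assumptions:

```ruby
# parallel_drb_specs.rb -- a rough sketch of the "cluster of spork instances"
# idea; assumes a spork/DRb server is already listening on each port.
PORTS = [8989, 8990, 8991]

spec_files = Dir['spec/**/*_spec.rb']
slice_size = (spec_files.size / PORTS.size.to_f).ceil
groups     = spec_files.each_slice(slice_size).to_a

threads = groups.zip(PORTS).map do |files, port|
  Thread.new do
    # --drb hands the run off to the DRb server instead of booting Rails again
    system("bundle exec rspec --drb --drb-port #{port} #{files.join(' ')}")
  end
end
threads.each(&:join)
```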

Tools to assist managing the application promotion process in an enterprise environment

I am curious on how others manage code promotion from DEV to TEST to PROD within an enterprise.
What tools or processes do you use to manage the "red tape", entry/exit criteria side of things?
My current organisation is stuck halfway between some custom online-forms functionality and paper-based processes for submitting documents and gathering approvals and reviews.
All this is left in the project manager's hands: tracking what has been submitted, passed review, and been approved, and advising management if there are any roadblocks that may need approval to be "overlooked" before an application can be promoted to the next environment.
A browser-based application would be ideal... so what's out there? Please show me that your google-fu is better than mine.
It's hard to find a good one via Google. There is a vast array of tools out there for issue management, so I'll mention what we use and what we would like to use.
We currently use Serena products. They have worked well for us in the past. TeamTrack is our issue management tool and handles the life cycle of any issue we work on. Version Manager is our source control and has the feature of implementing promotional groups like DEV, TEST and PROD. We use DEV, TSTAGE, TEST, PSTAGE and PROD to signify the movement from one to the other, but it's much the same. The two products integrate nicely so that the source associated with the issues is linked, but we have no build process set up in this environment. It's expensive, but it works well.
We are looking to move to a more common system using Jira for issue management, Subversion for source control, FishEye to link the two together, and CruiseControl for build management. This is less expensive, totaling a few thousand for an enterprise license, and provides all the same features, with the added bonus of SVN, which is a very nice version manager.
I hope that helps.
There are a few different scenarios that I've experienced over the years:
Dev -> Test: There is usually a code freeze date that stops work on new features; the code that has been tagged/labelled/archived gets built, copied onto the test machines, and the tests are run. This is also usually the least detailed of any push.
Test -> Prod: This requires production to go down briefly, which can mean that a "gone fishing" page goes up, or IIS doesn't have any sites running while the code is copied over again. There are special cases where a load balancer can act as a switch, so that the promotion happens and none of the customers experience any downtime, because the ones on the older server move across once their session ends.
To elaborate on that switch idea, the setup is to have two potentially live servers with just one taking requests; the load balancer sends all traffic to one machine, and the traffic can be switched once the other server has the updated code and is ready to go live.
There can also be a staging environment between test and production, where the process is similar in that there is a set date when the promotion happens.
Where I used to work there would be merge days where a developer spent most of a day in Perforce merging code so that it could be promoted from one environment to another.
Now there are a couple of cases where this isn't used:
"Hotfixes" or "Hot patches" would occur where I used to work and in this case the specific files were copied up into the staging and production environments on its own since the code change had to get into Production ASAP since something broke in production or some new thing that had to get done that takes 2 minutes gets done. In this case, the code change getting pushed in had to be reviewed and approved before going out.
Those are the different approaches I've seen used where generally there are schedules and timelines potentially have to be changed or additional resources brought in to make a hard date like if a conference is on a particular weekend that such and such is ready for that.
Of course in a few places there has been the, "Oh, was that broken? Let me see..." and a few minutes later, "No, see it isn't broken for me," where someone changed things without asking permission or anything where a company still has what they call "cowboy programming."
Another point is the scale of the release:
1) Tiny - This is the case where one web page goes up so that user X can do Y.
2) Small - A handful or so of files that isn't really complicated but isn't exactly trivial.
3) Medium - Where going from one environment to another requires changing a bunch of files and usually has scripts to move.
4) Big - Where there are scheduled promotions and various developers are asked for who is taking which shifts when the live push is done. I had this in a case where there was a data migration to do in addition to a release of some new e-commerce sites.
5) Mammoth - Where everything is brand new, including how it will be used. I don't think I've ever seen one of these, but I'd imagine Microsoft or Google have releases of this size.
Most releases fall somewhere in that spectrum, so how much planning and preparation is needed can vary quite a bit. And let's not forget that regulatory compliance can be its own pain in getting some things done.

Quality Control / Log Monitoring

One of the articles I really enjoyed reading recently was Quality Control by Last.FM. In the spirit of this article, I was wondering if anyone else had favorite monitoring setups for web type applications. Or maybe if you don't believe in Log Monitoring, why?
I'm looking for a mix of opinion slash experience here I guess.
We get a bunch of email/pager alerts from an older host/app/network monitoring environment that get gradually more abusive depending on severity of the problem/time taken to respond. Fortunately we all have thick skins and very broad senses of humour. :)
We use log4net, and normally write both to log files and the database. However, when we've been tracking down a particularly difficult problem, we've enabled the email appender, so that critical log messages went straight to a developer's email account. This allowed us to figure out what was happening more immediately.
In addition, our infrastructure team has several tools they use to monitor system uptime, event logs, etc., to give them early warning when something is about to go down. We've also helped them implement custom monitoring scripts that test specific functionality of our code.
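For reference, the email-appender setup described above is typically wired up in the log4net XML config. A rough sketch, where the addresses, SMTP host and threshold are placeholders:

```xml
<!-- A sketch of a log4net SmtpAppender: only ERROR-and-above messages are
     mailed; the addresses and SMTP host below are placeholders. -->
<appender name="CriticalMail" type="log4net.Appender.SmtpAppender">
  <to value="developer@example.com" />
  <from value="app@example.com" />
  <subject value="Critical error in production" />
  <smtpHost value="smtp.example.com" />
  <bufferSize value="512" />
  <lossy value="true" />
  <evaluator type="log4net.Core.LevelEvaluator">
    <threshold value="ERROR" />
  </evaluator>
  <layout type="log4net.Layout.PatternLayout">
    <conversionPattern value="%date %-5level %logger - %message%newline" />
  </layout>
</appender>
```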
