Ruby on Rails - Why use tests?

I'm confused about what the various testing facilities in Ruby on Rails are for. I have been using the framework for about 6 months, but I've never understood the testing part of it. The only testing framework I've used is JUnit 3 in Java, and that only briefly.
Everything I've read about it just shows testing validations. Shouldn't the validations in Rails just work? It seems more like testing the framework than testing your code. Why would you need to test validations?
Furthermore, the tests seem super fragile to any change in your code. So if you change anything in your models, you have to change your tests and fixtures to match. Doesn't this violate the DRY principle?
Third, writing test code seems to take a lot of time. Is that normal? Wouldn't it just be faster to refresh my browser and see if it worked? I already have to play with my application just to see if it flows correctly and make sure my CSS hasn't exploded. Why wouldn't manual testing be enough?
I've asked these questions before and I haven't gotten more than "automated testing is automated". I am smart enough to figure out the advantages of automating a task. My problem is that the costs of writing tests seem absurdly high compared to the benefits. That said, any detailed response is welcome, because I probably missed a benefit or two.

Shouldn't the validations in Rails just work? It seems more like testing the framework than testing your code. Why would you need to test validations?
The validations in Rails do work -- in fact, there are unit tests in the Rails codebase to ensure it. When you test a model's validation, you're testing the specifics of the validation: the length, the accepted values, etc. You're making sure the code was written as intended. Some validations are simple helpers, and you may opt not to test them on the notion that "no one can mess up a validates_numericality_of call." Is that true? Does every developer always remember to write it in the first place? Does every developer never accidentally delete a line in a bad copy-paste? In my opinion, you don't need to test every last combination of values for a Rails validation helper, but you do need a line to test that it's there with the right values passed, just in case some punk changes it in the future without proper forethought.
Further, other validations are more complex, requiring lots of custom code -- they may warrant more thorough testing.
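For instance, a one-assertion-per-rule model test is usually enough (a minimal sketch in Rails' built-in testing style; the Article model and its validation rules are hypothetical):

    require 'test_helper'

    class ArticleTest < ActiveSupport::TestCase
      # Hypothetical model: Article validates presence and length of its title.
      test "title is required" do
        assert !Article.new(:title => nil).valid?
      end

      test "title is capped at 100 characters" do
        assert !Article.new(:title => "x" * 101).valid?
        assert Article.new(:title => "x" * 100).valid?
      end
    end

If someone later deletes or loosens the validation, these lines fail immediately.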
Furthermore, the tests seem super fragile to any change in your code. So if you change anything in your models, you have to change your tests and fixtures to match. Doesn't this violate the DRY principle?
I don't believe it violates DRY. They're communicating (that's what programming is, communication) two very different things. The test says the code should do something. The code says what it actually does. Testing is extremely important when there is a disconnect between those things.
Test code and application code are intimately linked, obviously. I think of them as two sides of a coin. You wouldn't want a front without a back, or a back without a front. Good test code reinforces good application code, and vice versa. The two together are used to understand the whole problem that you're trying to solve. And well written test code is documentation -- it shows how the application code should be used.
Third, writing test code seems to take a lot of time. Is that normal? Wouldn't it just be faster to refresh my browser and see if it worked? I already have to play with my application just to see if it flows correctly and make sure my CSS hasn't exploded. Why wouldn't manual testing be enough?
You've only worked on very small projects, for which that testing is arguably sufficient. However, when you work on a project with several developers, thousands or tens of thousands of lines of code, integration points with web services, third party libraries, multiple databases, months of development and requirements changes, etc, there are a lot of other factors in play. Manual testing is simply not enough. In a project of any real complexity, changes in one place can often have unforeseen results in others. Proper architecture helps mitigate this problem, but automated testing helps as well (and helps identify points where the architecture can be improved) by identifying when a change in one place breaks another.
My problem is that the costs of writing tests seem absurdly high compared to the benefits. That said, any detailed response is welcome because I probably missed a benefit or two.
I'll list a few more benefits.
If you test first (Test Driven Development) your code will probably be better. I haven't met a programmer who gave it a solid shot for whom this wasn't the case. Testing first forces you to think about the problem and actually design your solution, versus hacking it out. Further, it forces you to understand the problem domain well enough to where if you do have to hack it out, you know your code works within the limitations you've defined.
If you have full test coverage, you can refactor with NO RISK. If a software problem is very complicated (again, real world projects that last for months tend to be complicated) then you may wish to simplify code that has previously been written. So, you can write new code to replace the old code, and if it passes all of your tests, you're done. It does exactly what the old code did with respect to the tests. For a project that plans to use an agile development method, refactoring is absolutely essential. Changes will always need to be made.
To sum up, automated testing, especially test driven development, is basically a method of managing the complexity of software development. If your project isn't very complex, the cost may outweigh the benefits (although I doubt it). However, real world projects tend to be very complex, and the results of testing and TDD speak for themselves: they work.
(If you're curious, I find Dan North's article on Behavior Driven Development to be very helpful in understanding a lot of the value in testing: http://dannorth.net/introducing-bdd)

I haven't really used Rails much, but I would think that these automated tests would be useful as smoke tests to be sure that the thing you just did doesn't break something that you did last week. This will become increasingly important as your project grows.
Also, writing the tests before you write the code (using the Test-Driven Development model) will help you write the code better and faster, since the tests force you to think the problem through fully. It will also help you see where to break complex methods up into smaller methods that you can test individually.
You are right, writing and maintaining tests takes a lot of time. Sometimes more time than the code itself. However, it can save you time in bug fixing and refactoring for the reasons above.

Tests should validate your application logic. Personally, I think my most important tests are the ones I run in Selenium. They check that what shows up in the browser is actually what I expect to see. However, if that's all I had, I would find it hard to debug - it helps to have lower-level tests as well, and integration, functional, and unit tests are all useful tools. Unit tests let you check that the model behaves the way you expect it to (and that means every method, not just validations). Validations will certainly Just Work, but only if you get them right. If you get them wrong, they will still Just Work, just not the way you expected. Writing a couple of lines of test is quicker than debugging later on.
A simple example like the one at http://wiseheartdesign.com/2006/01/16/testing-rails-validations just checks validations in a unit test. The O'Reilly article at http://www.oreillynet.com/pub/a/ruby/2007/06/07/rails-testing-not-just-for-the-paranoid.html?page=1 is a bit more complete (though still fairly basic).
Automated testing is particularly useful in regression testing where you change something and run a suite of tests to check that you didn't break anything else.
Tests are a form of repetition, but they don't violate DRY because they express things in a different way. A test says "I did X, so Y should happen". Code says "X happened, so now I need to do Z, which happens to cause Y". That is, a test stimulates a cause and checks an effect, while code responds to a cause and effects something.

A lot of the testing tutorials and the sample tests created by the Rails generators are pretty lame, and IMHO that can give the mistaken impression that you're supposed to test stupid stuff like the built-in Rails methods, etc.
Since Rails has its own test suite, there's no point in you writing or running tests that only test built-in Rails functionality. Your tests should exercise the code you're writing! :-)
As for the relative merit of running tests vs. just refreshing in your browser: the larger your app gets, the more of a pain in the ass it is to manually run through numerous scenarios and edge cases to make sure nothing in your application has broken. Eventually, you'll stop testing your entire application after each change and just start "spot testing" the areas you think should have been affected. Inevitably, you'll find something that used to work months ago that is now completely broken, and you'll have no certainty about when it broke or which changes broke it. After that happens enough times... you'll come to value automated testing.... :-)

For example:
I work on a 25,000+ line project (yes, in Rails 1.2), and last Monday I was asked to make users disappear from every list except the admin ones if their "leave_date" attribute is set to a date in the past.
You can rewrite every list action (50+) to add a

    @users.reject! { |u| u.leave_date && u.leave_date < Date.today } # nil-safe

Or you can override the "find" method (DRY ;-), but only if you have tests (on everything that finds users!) will you know that you didn't break anything by overriding User#find!!
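A hedged sketch of that DRY route (Rails 1.2 has no named_scope, so a dedicated class-method finder stands in for overriding find; the leave_date column comes from the example above, everything else is an assumption):

    class User < ActiveRecord::Base
      # Single place that decides which users appear in listings.
      def self.find_active
        find(:all, :conditions => ["leave_date IS NULL OR leave_date >= ?", Date.today])
      end
    end

    # The test that lets you change the finder without fear:
    class UserTest < Test::Unit::TestCase
      def test_find_active_hides_departed_users
        departed = User.create!(:leave_date => 1.day.ago.to_date)
        current  = User.create!
        assert_equal [current], User.find_active
      end
    end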

Everything I've read about it just shows testing validations. Shouldn't the validations in Rails just work? It seems more like testing the framework than testing your code. Why would you need to test validations?
There's a good Railscast showing one way to test controllers.

Related

Is there any advantage in doing unit testing along with acceptance testing?

Suppose you are building an API in Rails. Is it enough if we write request specs alone, without the model specs, controller specs, and view specs? Why do we need unit testing if we do acceptance and functional testing, or feature testing for front-end web projects? I insisted on doing unit testing, as it allows you to write decoupled code, but my colleague is against it. What are the best practices on this in the Ruby on Rails community?
If you only have time to do one type of testing, and you are writing an API, then it might make sense to only do feature testing by simply calling your API endpoints. After all, it's pretty important that those endpoints return the expected results.
However, when your feature tests start breaking, you will potentially have a terrible time figuring out the source of the problems without unit tests. Is there a core piece of your software that most of your endpoints are using? Good luck refactoring that without a robust set of unit tests.
But that really speaks to what you have to figure out -- is your core set pretty stable? Are you really just adding features or new endpoints? If so, you can probably get away with a heavy feature test approach.
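For illustration, an endpoint-level request spec can be as small as this (a sketch with rspec-rails; the /api/widgets path and the JSON shape are invented):

    require 'rails_helper'

    RSpec.describe "Widgets API", :type => :request do
      it "returns the widget list as JSON" do
        get "/api/widgets"

        expect(response).to have_http_status(:ok)
        expect(JSON.parse(response.body)).to be_an(Array)
      end
    end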
Rails and testing go together like peas and carrots. Here is a great resource that highlights (better than I ever could) the importance of using tests (of all degrees) in your Rails projects. Hope this helps!
Not sure about "the community", but in my opinion it depends on the complexity of the project. If it's a very straightforward API project, doing only feature tests may be fine.
But if the project becomes larger, unit tests allow you to better pinpoint errors in case anything breaks. I.e. you don't see "there's a bug somewhere in feature X" but "this or that class does not work when invoking a particular method with specific arguments".

What are all the pieces to an effective TDD strategy?

I'm really getting frustrated with learning how to properly develop software using TDD. It seems that everyone does it differently and in a different order. At this point, I'd just like to know: what are all the considerations? This much is what I've come up with: I should use RSpec and Capybara. With that said, what are all the different types of tests I need to write to have a well-built and well-tested application? I'm looking for a list that covers the area of my application being tested, the framework needed to test it, and any dependencies.
For example, it seems that people advise starting by unit testing your models, but when I watch tutorials on TDD it seems like they only write integration tests. Am I missing something?
Well, the question "how do you TDD?" is as open as the question "how do you test properly?". In Ruby, and more specifically in Rails, RSpec is the tool to start with, but not to be done with. RSpec allows you to write unit tests for your components, to test them separately. In the Rails context, that means:
test your models
test your controllers
test your views
test your helpers
test your routes
It is a very good tool and not exactly Rails-bound; it is also used to test other frameworks.
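For a taste of what such a unit test looks like (a minimal sketch; the Post model and its published? method are invented for illustration):

    require 'spec_helper'

    describe Post do
      describe "#published?" do
        it "is true once published_at is in the past" do
          post = Post.new(:published_at => 1.hour.ago)
          post.should be_published
        end

        it "is false for a draft with no published_at" do
          Post.new.should_not be_published
        end
      end
    end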
After you're done with RSpec, you should jump to Cucumber. Cucumber (http://cukes.info/) is the most used tool (again, in the Rails environment) for writing integration tests. You can then integrate Capybara with Cucumber.
After you're done with Cucumber, you'll have tested your application backend and (part of) its HTML output. That's when you should also test your JavaScript code. How to do that? First, you'll have to unit test it. Jasmine (http://pivotal.github.com/jasmine/) is one of the tools you might use for the job.
Then you'll have to test its integration with your structure. How to do that? You'll come back to Cucumber and integrate Selenium (http://seleniumhq.org/) with your Cucumber framework, and you'll be able to test your integration "live" in the browser, having access to your JavaScript magic and testing it on the spot.
So, after you're done with these steps, you'll have covered most of what's necessary for a well-integrated test environment. Are we done? Not really. You should also set up a coverage tool (one available: https://github.com/colszowka/simplecov) to check that your code really is well tested and no loose ends are left.
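Setting up the coverage tool is typically two lines at the very top of your test helper (this follows simplecov's documented usage; 'rails' is its built-in profile):

    # spec/spec_helper.rb -- must come before any application code is loaded
    require 'simplecov'
    SimpleCov.start 'rails'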
After you're done with these morose steps, there is one last thing to do, in case you are not developing all alone and the team is big enough to make things unmanageable otherwise: set up a test server, which does nothing other than run all the previous steps regularly and deliver notifications about the results.
So, all of this sets a good TDD environment for the interested developer. I only named the most used frameworks in the ruby/rails community for the different types of testing, but that doesn't mean there aren't other frameworks as or more suitable for your job. It still doesn't teach you how to test properly. For that there's more theory involved, and a lot of subdebates.
In case I forgot something, please write it in a comment below.
Besides that, you should decide how you want to write your tests. Namely, are you going for the declarative or the imperative approach?
Start simple and add more tools and techniques as you need them. There are many ways to TDD an app, because every app is different. One way is to start with an end-to-end test with RSpec and Capybara (or Cucumber and Capybara) and then add more fine-grained tests as you need them.
You know you need more fine-grained tests when it takes more than a few minutes to make a Capybara test pass.
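As a sketch of such an end-to-end starting point (RSpec with Capybara's feature DSL; the route and page copy are invented):

    require 'spec_helper'

    feature "Signing up" do
      scenario "a visitor creates an account" do
        visit "/signup"
        fill_in "Email",    :with => "user@example.com"
        fill_in "Password", :with => "secret123"
        click_button "Sign up"

        page.should have_content("Welcome")
      end
    end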
Also, if the domain of your application is non-trivial it might be more fruitful for you to start testing the domain first.
It depends! Try different approaches and see what works for you.
End-to-end development of real-world applications with TDD is an underdocumented activity indeed. It's true that you'll mostly find schoolbook examples, katas and theoretical articles out there. However, a few books take a more comprehensive and practical approach to TDD - GOOS for instance (highly recommended), and, to a lesser extent, Beck's Test Driven Development by Example, although they don't address RoR specifically.
The approach described in GOOS starts with writing end-to-end acceptance tests (integration tests, which may amount to RSpec tests in your case), but within that loop you code as many TDD unit tests as you need to design your lower-level objects. When writing those, you can basically start where you want - from the outer layers, the inner layers, or just the parts of your application that are most convenient to you. As long as you mock out any dependency, they'll remain unit tests anyway.
I had the same question when I started learning Rails: there are so many tools and methods that promise to make testing better. After spending too much time on them, I finally realized that you can simply forget the rule that you must do this or that - test the things you think are most likely to have problems first, then move on to the rest. It takes time.
That's just my point of view.

How to manage a project on ruby on rails 2.3?

I have a large Ruby on Rails 2.3 app which is now a disaster because of slowness and many bugs. I'm the only programmer, and every day I've been debugging and tearing my hair out because of this. Users are already using the product, but there are so many bugs and the data is scattered.
I was employed without prior knowledge of project development and management. Now I'm suffering through mounting overtime and a crisis of code that needs to be fixed.
Also, I created this app while learning Rails, so there is code in it that has become strange to me.
What should I do?
What are your suggestions?
What books do I need to read about more?
Please I need some help.
Thanks.
I would love to recommend you upgrade to Rails 3, especially since there are many newer features, some things are simplified, and it would ease future maintainability.
However, unfortunately, I am hesitant to (or rather simply cannot) actually recommend that given that you already have much more on your hands.
In this case, the best thing you can do is start writing tests. If there are so many bugs, I have to assume that either you have no tests or you have an incomplete test suite. Tests will help give you confidence that you do not break anything when you try to fix something else.
The default Rails test framework is described in the Ruby on Rails Guides. Having said that, many people prefer the RSpec testing framework. There are indeed shortcomings in the default Rails testing framework (notably the fragility of fixtures - try a factory gem instead - and missing features such as mocks, expectations, and nested contexts).
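As a taste of the factory approach (this sketch uses the factory_girl syntax of that era; the model and attributes are made up):

    # test/factories.rb
    Factory.define :user do |u|
      u.name  "Jane"
      u.email "jane@example.com"
    end

    # In a test, build exactly the record you need instead of leaning on shared fixtures:
    user = Factory.create(:user, :name => "Custom Name")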
You should read up on the testing frameworks, and maybe try it a bit. Pick one testing framework early on however, and start testing everything!
Perhaps when you become more confident in your test suite and have fixed the most important bugs, you should think about a path to upgrading Rails - because all the gems will march on and gradually drop support for Rails 2.3, which means you will be using increasingly old gems that may no longer be well supported.
From what I understood, you are asking for project management tips and tools to get a Rails project under control.
I believe the first thing you need to do is stabilize the project. To do this, you will need to minimize the bugs and chart the required work.
I see two complementary approaches for this:
use a task/bug tracking tool
start using cucumber for testing
Task/bug tracking
This is very important, because you will need some kind of list that itemizes all bugs.
Sometimes users discover a bug, and suddenly you have to drop everything, because at that moment, that single bug is the most important bug ever, and needs to be solved immediately.
However, if you asked them outright whether this bug is more or less important than the ones you are already fixing, the answer could be different.
So it is to your advantage if there is a clear way to let the users participate in that decision process. If there is a shared bug list, users can also follow the current state (what you are working on), and they can indicate which bugs are more important to them.
Secondly: having a list of items (work/tasks/outstanding bugs/...) will also help you plan the work.
There are a lot of options for bug tracking, but some easy/pragmatic/free suggestions are:
check out Trello
use GitHub issues
Tracking the bugs/tasks will give you the feeling you gain control of your project, and furthermore: it will make this also more visible to your client.
Cucumber
When fixing bugs, there is always the danger of introducing new bugs, especially in a project that was not originally your own.
In a project where there are next to no tests, I always propose to start with cucumber. Cucumber has a few advantages:
it tests your application/website from the outside in: no need to understand the code fully, you just need to know what the application should do. If I click this link, it should take me to that page.
it is really easy to write tests in cucumber, and you get test-coverage really quickly
as a bonus, your test code is readable, which you could show your clients/users, and they would actually understand what is covered by the tests (and could correct/improve it); see the small example below
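A small example of that readability (hedged: the scenario text is what Cucumber parses, shown here as comments; the step definitions are plain Ruby driving Capybara, and all paths and copy are invented):

    # features/login.feature reads, in plain language:
    #   Scenario: Logging in
    #     Given I am on the login page
    #     When I log in as "user@example.com"
    #     Then I should see "Welcome back"

    # features/step_definitions/login_steps.rb
    Given /^I am on the login page$/ do
      visit "/login"
    end

    When /^I log in as "(.*)"$/ do |email|
      fill_in "Email", :with => email
      fill_in "Password", :with => "secret"
      click_button "Log in"
    end

    Then /^I should see "(.*)"$/ do |text|
      assert page.has_content?(text)
    end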
Upgrading or not?
I personally believe your first step should be stabilizing the project and minimizing/removing all bugs. While upgrading to Rails 3 would be a huge improvement, it is not a straightforward process. There are good guidelines, but if you do it now, you will have no idea whether a bug was introduced during the upgrade or existed before. First get your code quality in order, and then do the upgrade.
Hope this helps.
Actually, what you are asking depends completely on how much refactoring you need to clean up the whole project. If you have enough time on your hands to clean it up completely, I would suggest the following steps:
Getting Ready
Get a visualization of the whole project: what is required and what is not.
Define your resources and relation between them properly.
Use proper RESTful routing.
Decide the test tools and frameworks like cucumber, rspec, factory girl etc.
Plan of Action
Decide (if possible, as a team) what the minimum or necessary changes are.
Isolate all the components in different groups so that each group can be refactored individually. Smaller groups are preferred.
Decide test cases.
Break tasks down into the smallest pieces possible.
Make sure to keep your test coverage more than 90%.
This process will take around 4-5 months for a medium-scale project with a team of 4 members.
Let us know if you have any specific confusion.
What to read: http://guides.rubyonrails.org/v2.3.11/ Start with chapters about debugging and performance testing.
Forget about upgrading to Rails 3 for now, at this moment it would only introduce many more bugs and probably lots of problems with legacy gems and plugins. And don't forget that you need ruby 1.9 in order to upgrade to rails 3 - yet another batch of problems.
Adding tests is a good idea, as ronalchn suggested. I'd recommend starting with unit tests. You might have to rewrite code a lot in order for it to be testable. (In other words, instead of trying to fit tests to the current legacy code, it's usually better to refactor the code to make it testable.)
Unfortunately there is no quick fix in your current solution. If you seriously consider an upgrade, it will take time to fill in the missing tests if you haven't been writing them, pick up the testing frameworks on top of Rails 3, and work out the necessary data migration once you are ready to flip the switch.
The other option is to continue with Rails 2.x, which isn't completely unreasonable, although the support avenues will be much more limited. You still need to work in the tests as a first priority and understand the various nuances present in the existing application.
For your scenario (one-man racket with little to no prior experience), I feel that sticking with what you have and improving the testability up to the point where you would feel an upgrade is worthwhile, would be the prudent action. No matter which course of action you would take, be prepared to put in quite a bit of work in the short-to-middle term (but such is the case of accumulating technical debt).

How thorough should you get with RSpec testing?

I'm just starting to grasp BDD and RSpec and one thing I'm really having trouble with is figuring out how thorough I should be with my testing.
I'm just not understanding how fine-grained my testing should be to still be useful but not double development time.
Is it just a matter of preference? Or is there some general standard for what should be tested?
There are a few factors to consider here.
Spec coverage should be greatest for the most important features and the things most likely to break.
Specs should express developer intention. Don't write specs for trivial things you or someone else might change later.
Test behavior not implementation. You should be able to change the internal implementation of a class and still pass the specs. This makes refactoring easier.
Any time you fix bugs add specs to prevent the bug from regressing.
I find it helpful to think of Tests/Specs as a safety net. I want a spec to fail if another dev (or myself) breaks something that I spent time to make work. I don't want them to have to spend a lot of time changing specs for things that don't really matter. I also don't want to inhibit them from improving the application because of specs that test things that weren't important to me when writing the code.

Best way to add tests to an existing Rails project?

I have a Rails project which I neglected to build tests for (for shame!) and the code base has gotten pretty large. A friend of mine said that RSpec was a pain to use unless you use it from the beginning. Is this true? What would make him say that?
So, considering the available test suites and the fact that the code base is already there, what would be my best course of action for getting this thing testable? Is it really that much different from doing it from the beginning?
This question came up recently on the RSpec mailing list, and the advice we generally gave was:
Don't bother trying to retrofit specs onto existing, working code unless you're going to change it - it's exhausting and, unless the code needs to change, rather pointless.
Start writing specs for any changes you make from now on. Bug fixes are an especially good opportunity for this.
Try to train yourself into the discipline that before you touch the code, first of all write a failing example (=spec) to drive out the change.
You may find that the design of code which wasn't driven out by code examples or unit tests makes it awkward to write tests or specs for. This is perhaps what your friend was alluding to. You will almost certainly need to learn a few key refactoring techniques to break up dependencies so that you can exercise each class in isolation from your specs. Michael Feathers' excellent book, Working Effectively With Legacy Code has some great material to help you learn this delicate skill.
I'd also encourage you to use the built-in spec:rcov rake task to generate code coverage stats. It's extremely rewarding to watch these numbers go up as you start to get your codebase under test.
Maybe start with the models? They should be testable in isolation, which ought to make them the lowest-hanging fruit.
Then pick a model and start writing tests that say what it does. As you go along, think about other ways to test the code - are there edge cases that maybe you're not sure about? Write the tests and see how the model behaves. As you develop the tests, you may see areas in the code that aren't as clean and de-duplicated (DRY) as they might be. Now you have tests, you can refactor the code, since you know that you're not affecting behaviour. Try not to start improving design until you have tests in place - that way lies madness.
Once you have the models pinned down, move up.
That's one way. Alternatives might be starting with views or controllers, but you may find it easier to start with end-to-end transaction tests and work your way into smaller and smaller pieces as you go along.
The accepted answer is good advice - although not practical in some instances. I recently was faced with this problem on a few apps of mine because I NEEDED tests for existing code. There simply was no other way around it.
I started off doing all unit tests, then moved onto functionals.
Get in the habit of writing failing tests for any new code, or whenever you're going to change a part of the system. I've found this has helped me gain more knowledge of testing as I go.
Use rcov to measure your progress.
Good luck!
Writing tests for existing code may reveal bugs in that code. The tests force you to look at the existing code to see what test you need to write in order to make it pass, and along the way you may spot code that could be written better or is now useless.
Another tip is to write a test whenever you encounter a bug, so it can never recur; this is called regression testing.
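For instance (a hedged sketch; the Order model and the bug it pins down are invented for illustration):

    class OrderTest < ActiveSupport::TestCase
      # Regression test: the total used to ignore free line items.
      test "total is zero when every line item is free" do
        order = Order.new
        order.line_items.build(:price => 0, :quantity => 3)
        assert_equal 0, order.total
      end
    end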
Retrofitting specs is not inevitably a bad idea. You go from working code to working code with known properties which allows you to understand whether any future change breaks anything. At the moment if you need to make a change how can you know what it will affect?
What people mean when they say that it is hard to add tests/specs to existing code is that code which is hard to test is often highly coupled, which makes it hard to write low-level isolated tests.
One idea would be to start with full-stack tests using something like the RSpec story runner. You can then work from the 'outside in' isolating what you can in low-level isolated tests and gradually untangle the harder code bit by bit.
You can start writing "characterization tests". For this, you might want to try out the pretentious gem (it is still a work in progress, though).
