Unit Testing Approach for an Algorithm in Rails model - ruby-on-rails

I have yet to jump into the TDD/BDD camp and am still trying to make the mental switch; for now I write the business logic before my tests.
In one of my Rails models I have a complex algorithm. The implementation can be thought of as a couple of nested loops with a lot of method calls on the same model.
Most of these methods take a complex hash created at the start of the loop, modify it, and pass it on to another method further along the loop, which processes it in turn until we arrive at the final answer hash.
How should I go around unit testing my methods?

Best practice would suggest that you test the boundaries rather than the internal method calls.
Testing the inner workings of a class tends to lead to brittle tests that break even though the end result is still what you want.
In that light, it is best to test that a given input produces the expected output and avoid testing how the code arrives at that output.
There is an excellent talk by Sandi Metz on the subject here http://vimeo.com/48106365
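To make that concrete for the algorithm described in the question, here is a minimal boundary-style sketch. The Simulation class, its constructor, and the keys in the result hash are hypothetical stand-ins for your model and its entry point; the point is to assert on the final hash for a known input without stubbing any of the intermediate methods.
describe Simulation, "#run" do
  it "produces the expected final hash for a simple scenario" do
    input  = { :players => 2, :rounds => 1, :seed => 42 }   # known, fixed input
    result = Simulation.new(input).run                      # public entry point only

    expect(result[:winner]).to eq(:player_1)                # assert on the answer hash,
    expect(result[:rounds_played]).to eq(1)                 # not on internal method calls
  end
end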

Related

Testing: how to focus on behavior instead of implementation without losing speed?

It seems that there are two totally different approaches to testing, and I would like to cite both of them.
The thing is that those opinions were stated five years ago (in 2007), and I am interested in what has changed since then and which way I should go.
Brandon Keepers:
The theory is that tests are supposed to be agnostic of the
implementation. This leads to less brittle tests and actually tests
the outcome (or behavior).
With RSpec, I feel like the common approach of completely mocking your
models to test your controllers ends up forcing you to look too much
into the implementation of your controller.
This by itself is not too bad, but the problem is that it peers too
much into the controller to dictate how the model is used. Why does it
matter if my controller calls Thing.new? What if my controller decides
to take the Thing.create! and rescue route? What if my model has a
special initializer method, like Thing.build_with_foo? My spec for
behavior should not fail if I change the implementation.
This problem gets even worse when you have nested resources and are
creating multiple models per controller. Some of my setup methods end
up being 15 or more lines long and VERY fragile.
RSpec’s intention is to completely isolate your controller logic from
your models, which sounds good in theory, but almost runs against the
grain for an integrated stack like Rails. Especially if you practice
the skinny controller/fat model discipline, the amount of logic in the
controller becomes very small, and the setup becomes huge.
So what’s a BDD-wannabe to do? Taking a step back, the behavior that I
really want to test is not that my controller calls Thing.new, but
that given parameters X, it creates a new thing and redirects to it.
David Chelimsky:
It’s all about trade-offs.
The fact that AR chooses inheritance rather than delegation puts us in
a testing bind – we have to be coupled to the database OR we have to
be more intimate with the implementation. We accept this design choice
because we reap benefits in expressiveness and DRY-ness.
In grappling with the dilemma, I chose faster tests at the cost of
slightly more brittle. You’re choosing less brittle tests at the cost
of them running slightly slower. It’s a trade-off either way.
In practice, I run the tests hundreds, if not thousands, of times a
day (I use autotest and take very granular steps) and I change whether
I use “new” or “create” almost never. Also due to granular steps, new
models that appear are quite volatile at first. The valid_thing_attrs
approach minimizes the pain from this a bit, but it still means that
every new required field means that I have to change
valid_thing_attrs.
But if your approach is working for you in practice, then it's good! In
fact, I’d strongly recommend that you publish a plugin with generators
that produce the examples the way you like them. I’m sure that a lot
of people would benefit from that.
Ryan Bates:
Out of curiosity, how often do you use mocks in your tests/specs?
Perhaps I'm doing something wrong, but I'm finding it severely
limiting. Since switching to rSpec over a month ago, I've been doing
what they recommend in the docs where the controller and view layers
do not hit the database at all and the models are completely mocked
out. This gives you a nice speed boost and makes some things easier,
but I'm finding the cons of doing this far outweigh the pros. Since
using mocks, my specs have turned into a maintenance nightmare. Specs
are meant to test the behavior, not the implementation. I don't care
if a method was called I just want to make sure the resulting output
is correct. Because mocking makes specs picky about the
implementation, it makes simple refactorings (that don't change the
behavior) impossible to do without having to constantly go back and
"fix" the specs. I'm very opinionated about what a spec/tests should
cover. A test should only break when the app breaks. This is one
reason why I hardly test the view layer because I find it too rigid.
It often leads to tests breaking without the app breaking when
changing little things in the view. I'm finding the same problem with
mocks. On top of all this, I just realized today that mocking/stubbing
a class method (sometimes) sticks around between specs. Specs should
be self contained and not influenced by other specs. This breaks that
rule and leads to tricky bugs. What have I learned from all this? Be
careful where you use mocking. Stubbing is not as bad, but still has
some of the same issues.
I took the past few hours and removed nearly all mocks from my specs.
I also merged the controller and view specs into one using
"integrate_views" in the controller spec. I am also loading all
fixtures for each controller spec so there's some test data to fill
the views. The end result? My specs are shorter, simpler, more
consistent, less rigid, and they test the entire stack together
(model, view, controller) so no bugs can slip through the cracks. I'm
not saying this is the "right" way for everyone. If your project
requires a very strict spec case then it may not be for you, but in my
case this is worlds better than what I had before using mocks. I still
think stubbing is a good solution in a few spots so I'm still doing
that.
I think all three opinions are still completely valid. Ryan and I were struggling with the maintainability of mocking, while David felt the maintenance tradeoff was worth it for the increase in speed.
But these tradeoffs are symptoms of a deeper problem, which David alluded to in 2007: ActiveRecord. The design of ActiveRecord encourages you to create god objects that do too much, know too much about the rest of the system, and have too much surface area. This leads to tests that have too much to test, know too much about the rest of the system, and are either too slow or brittle.
So what's the solution? Separate as much of your application from the framework as possible. Write lots of small classes that model your domain and don't inherit from anything. Each object should have limited surface area (no more than a few methods) and explicit dependencies passed in through the constructor.
With this approach, I've only been writing two types of tests: isolated unit tests, and full-stack system tests. In the isolation tests, I mock or stub everything that is not the object under test. These tests are insanely fast and often don't even require loading the whole Rails environment. The full stack tests exercise the whole system. They are painfully slow and give useless feedback when they fail. I write as few as necessary, but enough to give me confidence that all my well-tested objects integrate well.
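As a rough illustration of the kind of small, framework-free object and isolated spec described above (all names here are hypothetical, not taken from a real project):
# A plain Ruby object with its collaborator passed in explicitly.
class InvoiceTotaler
  def initialize(line_items, tax_calculator)
    @line_items     = line_items
    @tax_calculator = tax_calculator
  end

  def total
    subtotal = @line_items.inject(0) { |sum, item| sum + item.price }
    subtotal + @tax_calculator.tax_for(subtotal)
  end
end

# Isolated spec: everything that is not the object under test is a double,
# and no Rails environment or database is needed.
describe InvoiceTotaler do
  it "adds tax to the sum of the line items" do
    items = [double(:price => 10), double(:price => 20)]
    tax   = double("tax calculator", :tax_for => 3)
    expect(InvoiceTotaler.new(items, tax).total).to eq(33)
  end
end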
Unfortunately, I can't point you to an example project that does this well (yet). I talk a little about it in my presentation on Why Our Code Smells, watch Corey Haines' presentation on Fast Rails Tests, and I highly recommend reading Growing Object Oriented Software Guided by Tests.
Thanks for compiling the quotes from 2007. It is fun to look back.
My current testing approach is covered in this RailsCasts episode, and I have been quite happy with it. In summary, I have two levels of tests.
High level: I use request specs in RSpec, Capybara, and VCR. Tests can be flagged to execute JavaScript as necessary. Mocking is avoided here because the goal is to test the entire stack. Each controller action is tested at least once, maybe a few times.
Low level: This is where all complex logic is tested - primarily models and helpers. I avoid mocking here as well. The tests hit the database or surrounding objects when necessary.
Notice there are no controller or view specs. I feel these are adequately covered in request specs.
Since there is little mocking, how do I keep the tests fast? Here are some tips.
Avoid excessive branching logic in the high level tests. Any complex logic should be moved to the lower level.
When generating records (such as with Factory Girl), use build first and only switch to create when necessary (see the sketch after these tips).
Use Guard with Spork to skip the Rails startup time. The relevant tests are often done within a few seconds after saving the file. Use a :focus tag in RSpec to limit which tests run when working on a specific area. If it's a large test suite, set all_after_pass: false, all_on_start: false in the Guardfile to only run them all when needed.
I use multiple assertions per test. Executing the same setup code for each assertion will greatly increase the test time. RSpec will print out the line that failed so it is easy to locate it.
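Here is a rough sketch of the build-versus-create tip (and the :focus tag) in practice. The :article factory and the slug behaviour are made up for illustration; the point is that build exercises validations and callbacks in memory, while create is reserved for cases that genuinely need a row in the database.
it "generates a slug from the title", :focus do
  article = FactoryGirl.build(:article, :title => "Hello World")  # no INSERT issued
  article.valid?                                                  # runs validations/callbacks
  expect(article.slug).to eq("hello-world")
end

it "rejects a duplicate slug" do
  FactoryGirl.create(:article, :title => "Hello World")           # needs a persisted row to collide with
  expect(FactoryGirl.build(:article, :title => "Hello World")).not_to be_valid
end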
I find mocking adds brittleness to the tests which is why I avoid it. True, it can be great as an aid for OO design, but in the structure of a Rails app this doesn't feel as effective. Instead I rely heavily on refactoring and let the code itself tell me how the design should go.
This approach works best on small-medium size Rails applications without extensive, complex domain logic.
Great questions and great discussion. @ryanb and @bkeepers mention that they only write two types of tests. I take a similar approach, but have a third type of test:
Unit tests: isolated tests, typically, but not always, against plain ruby objects. My unit tests don't involve the DB, 3rd party API calls, or any other external stuff.
Integration tests: these are still focused on testing one class; the difference is that they integrate that class with the external stuff I avoid in my unit tests. My models will often have both unit tests and integration tests, where the unit tests focus on the pure logic that can be tested without involving the DB, and the integration tests involve the DB. In addition, I tend to test 3rd party API wrappers with integration tests, using VCR to keep the tests fast and deterministic, but letting my CI builds make the HTTP requests for real (to catch any API changes).
Acceptance tests: end-to-end tests, for an entire feature. This isn't just about UI testing via capybara; I do the same in my gems, which may not have an HTML UI at all. In those cases, this exercises whatever the gem does end-to-end. I also tend to use VCR in these tests (if they make external HTTP requests), and like in my integration tests, my CI build is setup to make the HTTP requests for real.
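For what it's worth, one way to wire VCR up for that workflow looks roughly like this; the ENV["CI"] check is just an example of how to make CI hit the real services while local runs replay cassettes:
VCR.configure do |c|
  c.cassette_library_dir = "spec/cassettes"
  c.hook_into :webmock
  # Replay recorded cassettes locally, but make the real HTTP requests on CI.
  c.default_cassette_options = { :record => ENV["CI"] ? :all : :once }
end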
As far as mocking goes, I don't have a "one size fits all" approach. I've definitely overmocked in the past, but I still find it to be a very useful technique, especially when using something like rspec-fire. In general, I mock collaborators playing roles freely (particularly if I own them, and they are service objects) and try to avoid it in most other cases.
Probably the biggest change to my testing over the last year or so has been inspired by DAS: whereas I used to have a spec_helper.rb that loads the entire environment, now I explicitly load just the class under test (and any dependencies). Besides the improved test speed (which does make a huge difference!), it helps me identify when my class under test is pulling in too many dependencies.
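A minimal sketch of what that looks like in practice (paths and names are hypothetical): the spec requires only the file it exercises instead of spec_helper, so the Rails environment never loads.
# spec/services/price_calculator_spec.rb
require_relative "../../app/services/price_calculator"

describe PriceCalculator do
  it "applies the discount to the base price" do
    expect(PriceCalculator.new(100, 0.1).total).to eq(90)
  end
end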

Pros and cons of using callbacks for domain logic in Rails

What do you see as the pros and cons of using callbacks for domain logic? (I'm talking in the context of Rails and/or Ruby projects.)
To start the discussion, I wanted to mention this quote from the Mongoid page on callbacks:
Using callbacks for domain logic is a bad design practice, and can lead to
unexpected errors that are hard to debug when callbacks in the chain halt
execution. It is our recommendation to only use them for cross-cutting
concerns, like queueing up background jobs.
I would be interested to hear the argument or defense behind this claim. Is it intended to apply only to Mongo-backed applications? Or is it intended to apply across database technologies?
It would seem that The Ruby on Rails Guide to ActiveRecord Validations and Callbacks might disagree, at least when it comes to relational databases. Take this example:
class Order < ActiveRecord::Base
  before_save :normalize_card_number, :if => :paid_with_card?
end
In my opinion, this is a perfect example of a simple callback that implements domain logic. It seems quick and effective. If I were to take the Mongoid advice, where would this logic go instead?
I really like using callbacks for small classes. I find it makes a class very readable, e.g. something like
before_save :ensure_values_are_calculated_correctly
before_save :down_case_titles
before_save :update_cache
It is immediately clear what is happening.
I even find this testable; I can test that the methods themselves work, and I can test each callback separately.
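For example, a sketch of both levels of test for the down_case_titles callback above (the Post model is just a stand-in):
describe Post do
  it "lowercases the title when the callback method runs" do
    post = Post.new(:title => "HELLO")
    post.send(:down_case_titles)             # test the method itself, directly
    expect(post.title).to eq("hello")
  end

  it "lowercases the title on save" do
    post = Post.create!(:title => "HELLO")   # test that the callback is wired up
    expect(post.title).to eq("hello")
  end
end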
I strongly believe that callbacks in a class should only be used for aspects that belong to the class. If you want to trigger events on save, e.g. sending a mail if an object is in a certain state, or logging, I would use an Observer. This respects the single responsibility principle.
Callbacks
The advantage of callbacks:
everything is in one place, so that makes it easy
very readable code
The disadvantage of callbacks:
since everything is in one place, it is easy to break the single responsibility principle
could make for heavy classes
what happens if one callback fails? Does it still follow the chain? Hint: make sure your callbacks never fail, or otherwise set the state of the model to invalid.
Observers
The advantage of Observers
very clean code, you could make several observers for the same class, each doing a different thing
execution of observers is not coupled
The disadvantage of observers
at first it can be unclear how behaviour is triggered (look in the observer!)
Conclusion
So in short:
use callbacks for the simple, model-related stuff (calculated values, default values, validations)
use observers for more cross-cutting behaviour (e.g. sending mail, propagating state, ...)
And as always: all advice has to be taken with a grain of salt. But in my experience Observers scale really well (and are also little known).
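As a sketch of the Observer approach (ActiveRecord observers shipped with Rails up to 3.2 and now live in the rails-observers gem; OrderMailer and confirmed? are assumptions for illustration):
class OrderMailObserver < ActiveRecord::Observer
  observe :order

  def after_save(order)
    # The cross-cutting concern lives here, not in the Order model.
    OrderMailer.confirmation(order).deliver if order.confirmed?
  end
end

# config/application.rb
# config.active_record.observers = :order_mail_observer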
Hope this helps.
EDIT: I have combined my answers on the recommendations of some people here.
Summary
Based on some reading and thinking, I have come to some (tentative) statements of what I believe:
The statement "Using callbacks for domain logic is a bad design practice" is false, as written. It overstates the point. Callbacks can be good place for domain logic, used appropriately. The question should not be if domain model logic should go in callbacks, it is what kind of domain logic makes sense to go in.
The statement "Using callbacks for domain logic ... can lead to unexpected errors that are hard to debug when callbacks in the chain halt execution" is true.
Yes, callbacks can cause chain reactions that affect other objects. To the degree that this is not testable, this is a problem.
Yes, you should be able to test your business logic without having to save an object to the database.
If one object's callbacks get too bloated for your sensibilities, there are alternative designs to consider, including (a) observers or (b) helper classes. These can cleanly handle multi object operations.
The advice "to only use [callbacks] for cross-cutting concerns, like queueing up background jobs" is intriguing but overstated. (I reviewed cross-cutting concerns to see if I was perhaps overlooking something.)
I also want to share some of my reactions to blog posts I've read that talk about this issue:
Reactions to "ActiveRecord's Callbacks Ruined My Life"
Mathias Meyer's 2010 post, ActiveRecord's Callbacks Ruined My Life, offers one perspective. He writes:
Whenever I started adding validations and callbacks to a model in a Rails application [...] It just felt wrong. It felt like I'm adding code that shouldn't be there, that makes everything a lot more complicated, and turns explicit into implicit code.
I find this last claim "turns explicit into implicit code" to be, well, an unfair expectation. We're talking about Rails here, right?! So much of the value add is about Rails doing things "magically" e.g. without the developer having to do it explicitly. Doesn't it seem strange to enjoy the fruits of Rails and yet critique implicit code?
Code that is only being run depending on the persistence state of an object.
I agree that this sounds unsavory.
Code that is being hard to test, because you need to save an object to test parts of your business logic.
Yes, this makes testing slow and difficult.
So, in summary, I think Mathias adds some interesting fuel to the fire, though I don't find all of it compelling.
Reactions to "Crazy, Heretical, and Awesome: The Way I Write Rails Apps"
In James Golick's 2010 post, Crazy, Heretical, and Awesome: The Way I Write Rails Apps, he writes:
Also, coupling all of your business logic to your persistence objects can have weird side-effects. In our application, when something is created, an after_create callback generates an entry in the logs, which are used to produce the activity feed. What if I want to create an object without logging — say, in the console? I can't. Saving and logging are married forever and for all eternity.
Later, he gets to the root of it:
The solution is actually pretty simple. A simplified explanation of the problem is that we violated the Single Responsibility Principle. So, we're going to use standard object oriented techniques to separate the concerns of our model logic.
I really appreciate that he moderates his advice by telling you when it applies and when it does not:
The truth is that in a simple application, obese persistence objects might never hurt. It's when things get a little more complicated than CRUD operations that these things start to pile up and become pain points.
This question right here (Ignore the validation failures in rspec) is an excellent illustration of why not to put logic in your callbacks: testability.
Your code can have a tendency to develop many dependencies over time, where you start adding unless Rails.test? into your methods.
I recommend only keeping formatting logic in your before_validation callback, and moving things that touch multiple classes out into a Service object.
So in your case, I would move the normalize_card_number to a before_validation, and then you can validate that the card number is normalized.
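A rough sketch of that suggestion (the format check is illustrative only, not a real card-number validation):
class Order < ActiveRecord::Base
  before_validation :normalize_card_number, :if => :paid_with_card?

  validates_format_of :card_number, :with => /\A\d{12,19}\z/, :if => :paid_with_card?

  private

  def normalize_card_number
    # Strip anything that is not a digit before validation runs.
    self.card_number = card_number.to_s.gsub(/\D/, "")
  end
end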
But if you needed to go off and create a PaymentProfile somewhere, I would do that in another service workflow object:
class CreatesCustomer
  def create(new_customer_object)
    return new_customer_object unless new_customer_object.valid?
    ActiveRecord::Base.transaction do
      new_customer_object.save!
      PaymentProfile.create!(new_customer_object)
    end
    new_customer_object
  end
end
You could then easily test certain conditions, such as when the object is not valid, when the save doesn't happen, or when the payment gateway throws an exception.
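For instance, a sketch of those cases in RSpec (Customer, valid_attributes and the gateway failure are assumptions made for illustration):
describe CreatesCustomer do
  it "returns the customer untouched when it is invalid" do
    customer = double("customer", :valid? => false)
    expect(CreatesCustomer.new.create(customer)).to eq(customer)
  end

  it "creates a payment profile when the customer saves" do
    customer = Customer.new(valid_attributes)
    expect(PaymentProfile).to receive(:create!).with(customer)
    CreatesCustomer.new.create(customer)
  end

  it "lets a payment gateway error bubble up (rolling back the save)" do
    customer = Customer.new(valid_attributes)
    allow(PaymentProfile).to receive(:create!).and_raise("gateway down")
    expect { CreatesCustomer.new.create(customer) }.to raise_error("gateway down")
  end
end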
In my opinion, the best scenario for using callbacks is when the method firing them has nothing to do with what's executed in the callback itself. For example, a good before_save :do_something should not execute code related to saving. It's more like how an Observer should work.
People tend to use callbacks only to DRY up their code. That's not bad in itself, but it can lead to complicated and hard-to-maintain code, because reading the save method does not tell you everything it does if you don't notice a callback is called. I think it is important to keep code explicit (especially in Ruby and Rails, where so much magic happens).
Everything related to saving should be in the save method. If, for example, the callback is there to make sure the user is authenticated, which has no relation to saving, then it is a good callback scenario.
Avdi Grimm has some great examples in his book Objects on Rails.
You will find here and here why he does not choose the callback option and how you can get rid of it simply by overriding the corresponding ActiveRecord method.
In your case you would end up with something like:
class Order < ActiveRecord::Base
  def save(*)
    normalize_card_number if paid_with_card?
    super
  end

  private

  def normalize_card_number
    # do something and assign self.card_number = "XXX"
  end
end
[UPDATE after your comment "this is still a callback"]
When we speak of callbacks for domain logic, I understand that to mean ActiveRecord callbacks; please correct me if you think the quote from Mongoid refers to something else. If there is a "callback design" somewhere, I did not find it.
I think ActiveRecord callbacks are, for the most (entire?) part, nothing more than syntactic sugar that you can get rid of as in my previous example.
First, I agree that these callback methods hide the logic behind them: someone who is not familiar with ActiveRecord will have to learn it to understand the code, whereas the version above is easily understandable and testable.
What can be worse with ActiveRecord callbacks is their "common usage", or the feeling of decoupling they can produce. The callback version may seem nice at first, but as you add more callbacks it becomes harder to understand your code (in which order do they run, which one may halt the execution flow, etc.) and to test it (your domain logic is coupled to ActiveRecord persistence logic).
When I read my example above, I feel bad about this code; it smells. I believe you probably would not end up with this code if you were doing TDD/BDD, and if you forget about ActiveRecord, I think you would simply have written a card_number= method. I hope this example is enough to make you think about design first rather than reaching directly for the callback option.
About the quote from Mongoid, I wonder why they advise against using callbacks for domain logic yet recommend them for queueing background jobs. Queueing a background job could itself be part of the domain logic and may sometimes be better designed with something other than a callback (say, an Observer).
Finally, there is some criticism of how ActiveRecord is used / implemented in Rails from an object-oriented design point of view; this answer contains good information about it, and you will easily find more. You may also want to check out the DataMapper design pattern / Ruby implementation project, which could be a replacement for ActiveRecord (how much better is debatable) and does not have its weaknesses.
I don't think the answer is all too complicated.
If you're intending to build a system with deterministic behavior, callbacks that deal with data-related things such as normalization are OK, callbacks that deal with business logic such as sending confirmation emails are not OK.
OOP was popularized with emergent behavior as a best practice [1], and in my experience Rails seems to agree. Many people, including the guy who introduced MVC, think this causes unnecessary pain for applications where runtime behavior is deterministic and well known ahead of time.
If you agree with the practice of OO emergent behavior, then the active record pattern of coupling behavior to your data object graph isn't such a big deal. If (like me) you see/have felt the pain of understanding, debugging and modifying such emergent systems, you will want to do everything you can to make the behavior more deterministic.
Now, how does one design OO systems with the right balance of loose coupling and deterministic behavior? If you know the answer, write a book, I'll buy it! DCI, Domain-driven design, and more generally the GoF patterns are a start :-)
[1] http://www.artima.com/articles/dci_vision.html, "Where did we go wrong?". Not a primary source, but consistent with my general understanding and subjective experience of in-the-wild assumptions.

Where to use `FactoryGirl.build_stubbed` and where to use RSpec's `mock`/`double`

I'm just curious where people tend to use FactoryGirl.build_stubbed and where they use double when writing RSpec specs. That is, are there best practices like "only use FactoryGirl methods in their corresponding model specs?"
Is it a code smell when you find yourself using FactoryGirl.create(:foo) in spec/models/bar_spec.rb?
Is it less of a code smell if you're using FactoryGirl.build_stubbed(:foo) in spec/models/bar_spec.rb?
Is it a code smell if you're using FactoryGirl.create(:foo) in foos_controller_spec.rb?
Is it less of a code smell if you're using FactoryGirl.build_stubbed(:foo) in foos_controller_spec.rb?
Is it a code smell if you're using FactoryGirl.build_stubbed(:foo) in spec/decorators/foo_decorator_spec.rb?
Sorry for so many questions! I just would love to know how other people draw the lines in unit test isolation and object oriented design best practices.
Thanks!
I believe that there are best practices that guide us to think about when to use mocks (in this case "doubles") versus integrating against real dependencies (in this case "Factories"). There is a really good book on testing (caveat: it uses Java examples) that describes the purpose of test-driven development, and I think it's very helpful in this discussion on testing in Rails applications. It describes the intention of testing as follows:
... our intention in test-driven development is to use mock objects to bring out relationships between objects.
Freeman, Steve; Pryce, Nat (2009-10-12). Growing Object-Oriented Software, Guided by Tests (Kindle Locations 3878-3879). Pearson Education (USA). Kindle Edition.
If we think about this emphasis on using test-driven development not only to prevent us from introducing regressions, but to help us think about how our code is structured in terms of its interface and relationships to other objects, we will naturally use mocks in many cases. I'll describe how this applies to your specific questions below.
First, in terms of whether we use mock objects or real dependencies in model tests - if we're testing class Foo and its dependency on Bar, we may want to substitute a mock for Bar. In this way we will see clearly the level of coupling to Bar as we'll have to mock the methods that will be called on it. If we find that our mock of Bar is complex, it's a sign that perhaps we should refactor Foo and Bar so that they are less coupled to one another.
In the sense that both Factory.create and Factory.build_stubbed have the same effect of keeping you from making dependencies on related classes explicit, I think they're both about as smelly, with Factory.create being the slower of the two options.
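To make the contrast concrete, here is a hypothetical Foo/Bar sketch (Foo taking its collaborator in the constructor is an assumption): a pure double forces you to declare every message Foo sends, while build_stubbed quietly hands Foo the whole real Bar interface without touching the database.
describe Foo do
  it "asks its bar for a price" do
    bar = double("bar", :price => 10)        # coupling to Bar is explicit in the stubbed messages
    expect(Foo.new(bar).total).to eq(10)
  end

  it "can also be exercised against a stubbed factory" do
    bar = FactoryGirl.build_stubbed(:bar)    # real Bar instance, no INSERT issued
    expect(Foo.new(bar).total).to eq(bar.price)
  end
end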
In my tests I tend not to worry too much about mocking external dependencies in controllers. I know that this is slower to run than fully mocking, and you don't have the benefit of making the controller's coupling to the model explicit, but it's quicker to write the test, and I'm not generally as worried about making clear the relationship between controllers and the persisted records that they manage. As long as you follow the pattern of "skinny controllers" there shouldn't be too much logic to worry about here anyway. If we need to specify a level of "test smell" here, I would say that it's a bit less smelly than model tests that depend on other factories.
I would tend to worry least about decorators that depend on factories of the classes that they decorate. This is because, by definition, the decorator should maintain the same interface as the class it decorates. Decoration is most often implemented with some form of inheritance, whether by using method_missing to delegate to the decoratee or by explicitly subclassing the decoratee. Because of this, you're breaking other rules of good object-oriented programming, like Liskov substitution, if the decorator deviates too much from the interface of the thing it decorates. As long as your decoration isn't implemented poorly by breaking the rules of good inheritance, the coupling to the class that you decorate is already present, so it's not making things much worse to have the decorator's tests depend on a persisted or stubbed factory of the thing that it decorates. You can go crazy with factories in decorator tests and it doesn't matter IMO.
I think that it's important to note that even if you prefer mocks in most cases you should still have some integration tests that use real dependencies. You'll just find that these cover specific high-value cases, where the isolated unit tests provide more coverage of functionality provided by your classes.
At any rate I break all of the above rules sometimes and they are just some guidelines that I use in writing tests. I'm looking forward to hearing how others are using Factories (build_stubbed and really persisted) versus mock objects (doubles) in their tests.

Can RSpec be used as a bruteforcing mechanism?

Little by little I am beginning to understand the power of RSpec, though I still do not see why I would need to use it to test controllers or views (I'm sure there are reasons behind it).
I'm creating a browser game where users attack monsters. In my head, RSpec would be really useful if it could provide a brute-forcing mechanism for me. For instance, let's say that I want to have a certain user fight all the monsters one by one and provide some conditions that will cause the tests to fail.
For example, if a user fights a monster of the same level, hp, and about the same strength, it would be really weird if he/she is killed while the monster still has more than 70% of its hp (that's just a scenario).
It seems to me that this kind of behaviour is tested with RSpec in combination with Cucumber? I would really like to get some insight on that topic.
Seems to me that the example you give is well suited for Cucumber. You are trying to test what happens when a user fights a certain monster. You would set up each scenario and then go through steps to exercise various portions of the user experience.
RSpec is for unit testing, i.e. making sure that each method of your models, controllers and views does the proper thing and gives you proper results. By definition, a unit test isolates itself to the code you are testing, so, for example, if a controller method needs data from a model, that data is mocked or stubbed for each condition of the method being tested. That way your test is not affected by other parts of the code not really under test.
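That said, the fight scenario from the question can also be expressed as a plain RSpec example. This is only a sketch; User#attack!, Monster, dead? and the stats shown are assumptions, not an existing API:
describe "fighting an evenly matched monster" do
  it "does not leave the monster above 70% hp if the user dies" do
    user    = User.new(:level => 5, :hp => 100, :strength => 10)
    monster = Monster.new(:level => 5, :hp => 100, :strength => 10)

    user.attack!(monster) until user.dead? || monster.dead?

    expect(monster.hp).to be <= 70 if user.dead?
  end
end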

What not to test in Rails?

I've been writing tests for a while now and I'm starting to get the hang of things. But I've got some questions concerning how much test coverage is really necessary. The consensus seems pretty clear: more coverage is always better. But, from a beginner's perspective at least, I wonder if this is really true.
Take this totally vanilla controller action for example:
def create
  @event = Event.new(params[:event])
  if @event.save
    flash[:notice] = "Event successfully created."
    redirect_to events_path
  else
    render :action => 'new'
  end
end
Just the generated scaffolding. We're not doing anything unusual here. Why is it important to write controller tests for this action? After all, we didn't even write the code - the generator did the work for us. Unless there's a bug in Rails, this code should be fine. It seems like testing this action is not all too different from testing, say, collection_select - and we wouldn't do that. Furthermore, assuming we're using Cucumber, we should already have the basics covered (e.g. where it redirects).
The same could even be said for simple model methods. For example:
def full_name
  "#{first_name} #{last_name}"
end
Do we really need to write tests for such simple methods? If there's a syntax error, you'll catch it on page refresh. Likewise, Cucumber would catch this so long as your features hit any page that calls the full_name method. Obviously, we shouldn't be relying on Cucumber for anything too complex. But does full_name really need a unit test?
You might say that because the code is simple the test will also be simple. So you might as well write a test since it's only going to take a minute. But it seems that writing essentially worthless tests can do more harm than good. For example, they clutter up your specs making it more difficult to focus on the complex tests that actually matter. Also, they take time to run (although probably not much).
But, like I said, I'm hardly an expert tester. I'm not necessarily advocating less test coverage. Rather, I'm looking for some expert advice. Is there actually a good reason to be writing such simple tests?
My experience in this is that you shouldn't waste your time writing tests for code that is trivial, unless you have a lot of complex stuff riding on the correctness of that triviality. I, for one, think that testing stuff like getters and setters is a total waste of time, but I'm sure that there'll be more than one coverage junkie out there who'll be willing to oppose me on this.
For me tests facilitate three things:
They guarantee old functionality stays unbroken. If I can check that nothing new I put in has broken my old things by running tests, that's a good thing.
They make me feel secure when I rewrite old stuff. The code I refactor is very rarely the trivial kind. If, however, I want to refactor non-trivial code, having tests to ensure that my refactorings have not broken any behavior is a must.
They are the documentation of my work. Non-trivial code needs to be documented. If, however, you agree with me that comments in code are the work of the devil, having clear and concise unit tests that let you understand what the correct behavior of something is, is (again) a must.
Anything I'm sure I won't break, or that I feel is unnecessary to document, I simply don't waste time testing. Your generated controllers and model methods, then, I would say are all fine even without unit tests.
The only absolute rule is that testing should be cost-efficient.
Any set of practical guidelines to achieve that will be controversial, but here is some advice to help avoid tests that are generally wasteful or do more harm than good.
Unit
Don't test private methods directly, only assess their effects indirectly through the public methods that call them.
Don't test internal states
Only test non-trivial methods, where different contexts may get different results (calculations, concatenation, regexes, branches...)
Don't assess things you don't care about, e.g. the full copy of some message or useless parts of complex data structures returned by an API...
Stub all the things in unit tests, they're called unit tests because you're only testing one class, not its collaborators. With stubs/spies, you test the messages you send them without testing their internal logic.
Consider private nested classes as private methods
Integration
Don't try to test all the combinations in integration tests. That's what unit tests are for. Just test happy-paths or most common cases.
Don't use Cucumber unless you really practice BDD
Integration tests don't always need to run in the browser. To test more cases with less of a performance hit, you can have some integration tests interact directly with model classes.
Don't test what you don't own. Integration tests should expect third-party dependencies to do their job, but should not substitute for their own test suite.
Controller
In controller tests, only test controller logic: redirections, authentication, permissions, HTTP status. Stub the business logic. Treat filters, etc. like private methods in unit tests, tested through public controller actions only (see the sketch after this list).
Others
Don't write route tests, except if you're writing an API, for the endpoints not already covered by integration tests.
Don't write view tests. You should be able to change copy or HTML classes without breaking your tests. Just assess critical view elements as part of your in-browser integration tests.
Do test your client JS, especially if it holds some application logic. All those rules also apply to JS tests.
Ignore any of those rules for business-critical stuff, or when something actually breaks (no one wants to explain to their boss/users why the same bug happened twice, which is why you should probably write at least a regression test when fixing a bug).
See more details on that post.
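As a sketch of the controller guideline above, applied to the scaffold-style create action shown in the question (EventsController and Event are stand-ins): stub the business logic and assert only on controller concerns such as the redirect.
describe EventsController do
  describe "POST create" do
    it "redirects to the index when the event saves" do
      event = double("event", :save => true)
      allow(Event).to receive(:new).and_return(event)   # business logic stubbed out
      post :create, :event => { :name => "RailsConf" }
      expect(response).to redirect_to(events_path)
    end
  end
end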
More coverage is better for code quality, but it costs more. There's a sliding scale here: if you're coding an artificial heart, you need more tests. The less you pay upfront, the more likely it is you'll pay later, maybe painfully.
In the full_name example, why have you placed a space between the names and ordered them first_name then last_name - does that matter? If you are later asked to sort by last name, is it OK to swap the order and add a comma? What if the last name is two words - will that additional space affect things? Maybe you also have an XML feed someone else is parsing? If you're not sure what to test for a simple undocumented function, think about the functionality implied by the method name.
I would think your company's culture is important to consider too. If you're doing more testing than others, then you're really wasting time. It doesn't help to have a well-tested footer if the main content is buggy. Causing the main build or other developers' builds to break would be worse, though. Finding the balance is hard - unless you are the decider, spend some time reading the test code written by other team members.
Some people take the approach of testing the edge cases and assume the main features will get worked out through usage. Considering getters/setters, I'd want a model class somewhere that has a few tests on those methods, maybe testing the database column type ranges. This at least tells me the network is OK, a database connection can be made, I have access to write to a table that exists, etc. Pages come and go, so don't consider a page load to be a substitute for an actual unit test. (A testing-efficiency side note: with automated testing based on file update timestamps (autotest), that test wouldn't run, and you want to know as soon as possible.)
I'd prefer to have better quality tests, rather than full coverage. But I'd also want an automated tool pointing out what isn't tested. If it's not tested, I assume it's broken. As you find failure, add tests, even if it's simple code.
If you are automating your testing, it doesn't matter how long it takes to run. You benefit every time that test code is run- at that point, you know a minimum of your code's functionality is working, and you get a sense of how reliable the tested functionality has been over time.
100% coverage shouldn't be your goal- good testing should be. It would be misleading to think a single test of a regular expression was accomplishing anything. I'd rather have no tests than one, because my automated coverage report reminds me the RE is unreliable.
The primary benefit you would get from writing a unit test or two for this method would be regression testing. If, sometime in the future, something was changed that impacted this method negatively, you would be able to catch it.
Whether or not that's worth the effort is ultimately up to you.
The secondary benefit I can see by looking at it would be testing edge cases, like, what it should do if last_name is "" or nil. That can reveal unexpected behavior.
(i.e. if last_name is nil, and first_name is "John", you get full_name => "John ")
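A minimal spec capturing that edge case might look like the following (the User model and attribute assignment are assumed); against the naive implementation above it fails with "John ", which is exactly the unexpected behavior being revealed:
describe "#full_name" do
  it "does not leave a trailing space when last_name is nil" do
    user = User.new(:first_name => "John", :last_name => nil)
    expect(user.full_name).to eq("John")
  end
end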
Again, the cost-vs-benefit is ultimately up to you.
For generated code, no, there's no need to have test coverage there because, as you said, you didn't write it. If there's a problem, it's beyond the scope of the tests, which should be focused on your project. Likewise, you probably wouldn't need to explicitly test any libraries that you use.
For your particular method, it looks like that's the equivalent of a setter (it's been a while since I've done Ruby on Rails) - testing that method would be testing the language features. If you were changing values or generating output, then you should have a test. But if you are just setting values or returning something with no computation or logic, I don't see the benefit of having tests cover those methods; if they are wrong, you should be able to detect the problem by visual inspection, or else the problem is a language defect.
As far as the other methods go, if you write them, you should probably have a test for them. In test-driven development, this is essential, as the tests for a particular method exist before the method does and you write methods to make the tests pass. If you aren't writing your tests first, then you still get some benefit from having at least a simple test in place should this method ever change.
