Is there any tangible value in unit testing your own HTML helpers? Many of these things just spit out a bunch of HTML markup - there's little if any logic. So, do you just compare one big HTML string to another? I mean, some of these things require you to look at the generated markup in a browser to verify it's the output you want.
Seems a little pointless.
Yes.
While there may be little to no logic now, that doesn't mean that there isn't going to be more logic added down the road. When that logic is added, you want to be sure that it doesn't break the existing functionality.
That's one of the reasons that Unit Tests are written.
If you're following Test Driven Development, you write the test first and then write the code to satisfy the test.
That's another reason.
You also want to make sure you identify and test any possible edge cases with your Helper (like un-escaped HTML literals, un-encoded special characters, etc).
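To make the escaping edge case concrete in Ruby/RSpec terms (most code elsewhere in these threads is Rails), here's a minimal sketch; menu_link is a hypothetical helper, so substitute your own:

require "rails_helper"

RSpec.describe ApplicationHelper, type: :helper do
  # menu_link is a made-up helper wrapping link_to; adjust to your own
  it "escapes user-supplied text instead of emitting it raw" do
    html = helper.menu_link("<script>alert(1)</script>", "/home")
    expect(html).to include("&lt;script&gt;")
    expect(html).not_to include("<script>alert(1)</script>")
  end
end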
I guess it depends on how many people will be using/modifying it. I typically create a unit test for an html helper if I know a lot of people could get their hands on it, or if the logic is complex. If I'm going to be the only one using it though, I'm not going to waste my time (or my employer's money).
I can understand you not wanting to write the tests though ... it can be rather annoying to write a few lines of html generation code that requires 5X that amount to test.
A helper takes a simple input and produces a simple output. That makes it a good case for TDD, given the time you were otherwise going to spend on build -> start site -> fix that silly issue -> start again -> oops, missed this other tiny thing -> start again... Then dev 2 comes along and makes a small change to "fix" it for something that wasn't working for them, goes through the same cycle, and doesn't notice at the time that the change broke your other scenarios.
Instead, you very quickly write the very simple test and verify that the simple input gave you the simple output you were expecting, with all the closing tags and quotes in place.
Having written HTML Helpers for sitemap menus, for example, or buttons for a wizard framework, I can assure you that some Helpers have plenty of logic that needs testing to be reliable, especially if intended to be used by others.
So it depends what you do with them really. And only you know the answer to that.
The general answer is that HTML helpers can be arbitrarily complex (or simple), depending on what you are doing. So the no-brainer, as with anything else, is to test when you need to.
Yes, there's value. How much value is to be determined. ;-)
You might start with basic "returns SOMEthing" tests, and not really care WHAT. Basically just quick sanity tests, in case something fundamental breaks. Then as problems crop up, add more details.
Also consider having your tests parse the HTML into DOMs, which are much easier to test against than strings, particularly if you are looking for just some specific bit.
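For example, with Nokogiri (a common Ruby HTML parser), you can assert against the parsed structure rather than the raw string. In this sketch, menu_helper and the CSS class are assumptions standing in for your own helper:

require "nokogiri"

html = menu_helper                          # hypothetical: whatever your helper returns
doc  = Nokogiri::HTML::DocumentFragment.parse(html)
link = doc.at_css("a.menu-item")            # hypothetical class on the generated link
expect(link).not_to be_nil
expect(link["href"]).to eq "/home"
expect(link.text).to eq "Home"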
Or... if you have automated tests against the webapp itself, ensure there are tests that look specifically for the output of your helpers.
Yes, it should be tested. Basic rule of thumb: if it is not worth testing, it is not worth writing.
However, you need to be a bit careful here when you write your tests. There is a danger that they can be very "brittle".
If you write your tests such that you expect back a specific string, and you have helpers that call other helpers, then a change in one of the core helpers could cause very many tests to fail.
So it may be better to test that you get back a non-null value, or that specific text is contained somewhere in the return value, rather than testing for an exact string.
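A sketch of the difference, assuming html holds a helper's return value inside a spec:

# Brittle: breaks whenever any nested helper changes its markup
expect(html).to eq %(<div class="box"><a href="/home">Home</a></div>)

# More robust: assert only what this helper is responsible for
expect(html).not_to be_nil
expect(html).to include %(href="/home")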
Suppose I have a user class that has a method is_old_enough?
The method just checks that the age is at least 18. Seems pretty simple.
Does TDD mean I have to write a test for this, even though it is trivial?
class User
  def is_old_enough?
    self.age >= 18
  end
end
And if so, why? What is the benefit of writing a test for this? You'd just be testing that x >= y works the way you expect the >= operator to work.
Because the most likely scenario I see happening is the following:
It turns out the age should actually be 21. That's a bug the test didn't catch, because we had the wrong assumptions when we wrote the code. So we go in and change the method to >= 21. But now the test fails! So we have to go and change the test too. The test didn't help, and it actually gave a false alarm when we were refactoring.
Looks like tests for simple methods like this are not actually testing anything useful, and are actually hurting you.
I think you're confusing test coverage and Test-Driven Development. TDD is just a practice of developing an automated test that is going to verify the use cases of some new feature. Usually it starts off failing because you've stubbed the functionality out or simply haven't implemented it. Next, you develop the feature until the test passes.
The idea is that you are in the mindset of developing tests that verify your important use cases/features. This doesn't necessarily mean you need to test simple functions if you think they are covered by your regular feature tests.
In terms of coverage, that's really up to you as the developer (or team) to decide. Obviously having around 1-to-1 coverage of tests to API is desirable, but it's your call whether is_old_enough? is always going to be this easy to implement. It may seem like an easy implementation now, but perhaps that will change in the future. You just need to be mindful of those kinds of decisions when you choose whether to write a test. More than likely, though, your use case won't change, and the test is easy to write. It doesn't hurt to feel confident in all areas of your code.
I think the question has more to do with unit testing than TDD in particular.
Short answer: focus on behaviors
Long answer: Well, there is a nice phrase out there: BDD is TDD done right, and I completely agree. While BDD and TDD are in large part the "same" thing (not equal, mind you!), BDD for me gives the context for doing TDD. You can read a lot on the Internet around this, so I will not write essays here, but let me say this:
In your example, yes, the test is necessary, because the rule that a user is old enough is a behavior of the User entity. The test serves as a safety net for many other things yet to come which will rely on this piece of information, and for me the test documents this behavior very well (I actually tend to read tests to find out what the developer had in mind when writing a class - you learn what to expect, how the class behaves, what the edge cases are, etc.).
I don't really see how the test would not catch the refactoring, since I would write the test with numbers 18, 19, 25 and 55 in mind (just a bunch of asserts typed very fast very easily)
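Those asserts might look like this, assuming User accepts age: in its constructor (the snippet above doesn't show one):

RSpec.describe User, "#is_old_enough?" do
  it "is false just below the boundary" do
    expect(User.new(age: 17).is_old_enough?).to be false
  end

  it "is true at and above the boundary" do
    [18, 19, 25, 55].each do |age|
      expect(User.new(age: age).is_old_enough?).to be true
    end
  end
end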
A very important piece of the puzzle is that unit tests are just one technique among several that you need. If your design is lacking, you will find yourself writing too many meaningless tests, or you will have hell testing classes that do multiple things, etc. You need very good SOLID skills to be able to shape classes in a way that testing only their public interfaces (this includes protected methods as well) actually tests the entire class. As said before, focusing on behaviors is the key here.
I'm having a hard time learning how to implement appropriate testing with rspec and rails.
Speaking generally, how does one test the "hard cases" such as did a file get generated or was an image properly resized? Currently, to me it seems that unit testing is designed for testing the transformation of values rather than making sure data is processed and output appropriately.
TL;DR You should explore for different tools and don't shy away from the "hard cases" that may not seem like traditional "unit tests". The hard cases often are testable.
Testing is definitely a rabbit hole. From the surface, it seems so nice and clean, but as you get in to it, the rabbit hole goes quite far down, and this is one example: how do you test these things? You want to be confident that your code is doing the right thing, but you also don't want to create too complicated and unmanageable tests, and you don't want to test too fine-grained either.
However, to your specific questions, I do have some ideas you may wish to look in to.
For testing that a file got generated, you can check that initially the file did not exist (Ruby has File.exist?) and then, after calling the method, check that it does exist. Of course you then have questions like, "Does it have the right content?", "Did it run to completion?", and you can test that stuff too by opening the file and checking it.
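A sketch of that shape, inside a spec; ReportGenerator and its output path are made up for illustration:

it "generates the report file" do
  path = "tmp/report.csv"                      # hypothetical output location
  File.delete(path) if File.exist?(path)
  expect(File.exist?(path)).to be false

  ReportGenerator.new.generate(path)           # made-up class under test

  expect(File.exist?(path)).to be true
  expect(File.read(path)).to include("header") # the "right content" check
end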
For the images, you can probably find facilities that allow you to check the properties of an image (perhaps Paperclip? Never used it, but it's a well-known gem). So you can do something like this (in sort-of pseudo-code, because I don't know of a tool to do this):
it "resizes the image" do
img = Image.open_image("pic.png")
img[:size].should eq [100, 100]
img.close
resize_image
image = Image.open_image("pic.png")
imge[:size].should eq [25, 25]
img.close
end
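If you want something runnable today, one option (my assumption, not part of the original pseudo-code) is the fastimage gem, which reads image dimensions cheaply; resize_image is still a hypothetical method under test:

require "fastimage"

it "resizes the image to a quarter of its size" do
  expect(FastImage.size("pic.png")).to eq [100, 100]  # [width, height]
  resize_image("pic.png")                             # hypothetical method under test
  expect(FastImage.size("pic.png")).to eq [25, 25]
end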
Testing often relies on finding more and more helpful gems/tools to massage situations. "Unit" tests will, yes, only check unit-level code, and they may be very simple, but not always. Then you start looking into library specs, request specs, routing specs, acceptance tests, controller specs, etc. There are a lot of tools out there, and research is key.
But again, the examples you listed may not be unit tests in the way you think of them. If your resizing or file generation is being done off of a model, then yes, it is a unit test, but you're no longer writing that simple code (like accessors and mutators). A lot of the time people new to thorough testing won't think that everything is testable, and it may not be, but if you play around and explore, you can often find a way to do so.
I understand that rspec is used to test with specific examples, but it seems to me that errors are most times of a dynamic nature. Therefore, I'm having some doubts about whether unit testing is really useful for me (I don't say that rspec is not useful, of course).
I'm thinking that if I add a validation on a model, this enforces its behaviour. Since I already know that I will put it there, what is the real point behind creating a test for it first? After all, it will always pass if I don't change the validation (in which case I will notice, of course). Provided that I'm the only developer, isn't that a little bit too much work for no real reason?
Moreover, let's say that I have a test that specifies that user.name must be either 'Tom' or 'John'. My tests work great, because I specify user.name inside the test. However, in the real application, it may happen that the name becomes 'Alex'. Rspec would not be able to enforce the behaviour, would it?
And I would be left with a passing test, but an error.
What do you think about all that? Are my concerns correct, or am I not thinking it through? I need to know whether I would get some strong benefits from messing with rspec, or whether it would mostly be a waste of time.
(Again, I understand that rspec can be useful, but what about the matters that I specify here?)
I'm thinking that if I add a validation on a model, this enforces its behaviour.
Adding validation to your model gives that model some behavior but does not enforce it. A behavior is only required when some other piece of code depends on it and will break if it changes. Until a behavior is used, removing it would have no impact on your project, and so nothing enforces that the behavior must be present.
Since I already know that I will put it there, what is the real point behind creating a test for it first? After all, it will always pass if I don't change the validation (in which case I will notice, of course).
Writing tests, especially writing tests first, gives you an immediate use of your code: something which requires a behavior to be present and which will fail quickly, reliably, and automatically if that behavior changes. Tests enforce the public interface of your code, because otherwise you will change that interface and not notice.
Provided that I'm the only developer, isn't that a little bit too much work for no real reason?
You may be the only person working on the project but you can't remember everything. Write tests so that next week you have something to make sure you don't violate the assumptions you made today.
Moreover, let's say that I have a test that specifies that user.name must be either 'Tom' or 'John'.
That is not specific enough to be a good test. You can test that "a user should be valid when user.name is 'Tom'" or "user.name must be included in ['Tom', 'John']" or even "a user should be invalid if user.name is 'Alex'". You cannot hope to write a test for all possible inputs to your application so you need to make intelligent choices about what to test. Test that valid inputs produce valid results. Test that invalid inputs fail in expected ways. Don't worry about testing all possible invalid inputs or invalid uses of your code.
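Assuming an ActiveRecord-style inclusion validation on name, those more specific tests might look like:

it "is valid with an allowed name" do
  expect(User.new(name: "Tom")).to be_valid
end

it "is invalid with a name outside the list" do
  user = User.new(name: "Alex")
  expect(user).not_to be_valid       # valid? also populates user.errors
  expect(user.errors[:name]).to be_present
end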
If it is not valid for "user.name" to be "Alex" then perhaps you should test that the code calling your User object does not try to set its name to "Alex". If "Alex" is a valid name but your code failed anyway then you should write more robust code, better tests, and a test for the name "Alex" to make sure you fixed your User class to handle that name.
Perhaps most importantly, if you are writing tests first then they can actually drive you to design a better interface for your User class. One which more clearly expresses the behavior of the "name" attribute and discourages you from setting invalid names.
Tests are tests. They test things. They don't enforce things.
They are useful because you can see what is and isn't working in your application. When you override that name= setter to do something fancy and it breaks on a simple case that you had written a test for, that test just saved your ass. For simple cases like this, going without a test might be okay. It's really rare that you see 100% test coverage in non-open-source applications. Until you learn what you don't need to test, though, it's easier to just write tests for everything you can.
If you don't understand Test Driven Development or why you would test, I think you should Google around on the subject a bit and you can get a good taste of what there is out there and why you should use it (and you should).
Test cases, in my opinion, are sort of like documenting the requirements. We ensure that these requirements are met when we pass the tests. The first time around it won't make much sense, as we will be writing the code with the requirements in mind. It is when we have to change the code to incorporate something else (or just refactor it for performance) that the tests really come into play. This time, we have to ensure that the previous requirements are also met while making the change, i.e. the test cases do not fail. This way, by having tests, we are making a note of the requirements and making sure they are met whenever we change or refactor the code.
I've been writing tests for a while now and I'm starting to get the hang of things. But I've got some questions concerning how much test coverage is really necessary. The consensus seems pretty clear: more coverage is always better. But, from a beginner's perspective at least, I wonder if this is really true.
Take this totally vanilla controller action for example:
def create
  @event = Event.new(params[:event])
  if @event.save
    flash[:notice] = "Event successfully created."
    redirect_to events_path
  else
    render :action => 'new'
  end
end
Just the generated scaffolding. We're not doing anything unusual here. Why is it important to write controller tests for this action? After all, we didn't even write the code - the generator did the work for us. Unless there's a bug in Rails, this code should be fine. It seems like testing this action is not all that different from testing, say, collection_select - and we wouldn't do that. Furthermore, assuming we're using Cucumber, we should already have the basics covered (e.g. where it redirects).
The same could even be said for simple model methods. For example:
def full_name
  "#{first_name} #{last_name}"
end
Do we really need to write tests for such simple methods? If there's a syntax error, you'll catch it on page refresh. Likewise, Cucumber would catch this so long as your features hit any page that calls the full_name method. Obviously, we shouldn't be relying on Cucumber for anything too complex. But does full_name really need a unit test?
You might say that because the code is simple, the test will also be simple, so you might as well write a test since it's only going to take a minute. But it seems that writing essentially worthless tests can do more harm than good. For example, they clutter up your specs, making it more difficult to focus on the complex tests that actually matter. Also, they take time to run (although probably not much).
But, like I said, I'm hardly an expert tester. I'm not necessarily advocating less test coverage. Rather, I'm looking for some expert advice. Is there actually a good reason to be writing such simple tests?
My experience in this is that you shouldn't waste your time writing tests for code that is trivial, unless you have a lot of complex stuff riding on the correctness of that triviality. I, for one, think that testing stuff like getters and setters is a total waste of time, but I'm sure that there'll be more than one coverage junkie out there who'll be willing to oppose me on this.
For me tests facilitate three things:
They guarantee unbroken old functionality. If I can check by running tests that nothing new I put in has broken my old things, it's a good thing.
They make me feel secure when I rewrite old stuff. The code I refactor is very rarely the trivial code. If, however, I want to refactor non-trivial code, having tests to ensure that my refactorings have not broken any behavior is a must.
They are the documentation of my work. Non-trivial code needs to be documented. If, however, you agree with me that comments in code are the work of the devil, having clear and concise unit tests that make you understand what the correct behavior of something is, is (again) a must.
Anything I'm sure I won't break, or that I feel is unnecessary to document, I simply don't waste time testing. Your generated controllers and model methods, then, I would say are all fine even without unit tests.
The only absolute rule is that testing should be cost-efficient.
Any set of practical guidelines to achieve that will be controversial, but here is some advice to avoid tests that will be generally wasteful, or that do more harm than good.
Unit
Don't test private methods directly, only assess their effects indirectly through the public methods that call them.
Don't test internal states
Only test non-trivial methods, where different contexts may get different results (calculations, concatenation, regexes, branches...)
Don't assess things you don't care about, e.g. full copy on some message or useless parts of complex data structures returned by an API...
Stub all the things in unit tests: they're called unit tests because you're only testing one class, not its collaborators. With stubs/spies, you test the messages you send them without testing their internal logic.
Consider private nested classes as private methods
Integration
Don't try to test all the combinations in integration tests. That's what unit tests are for. Just test happy-paths or most common cases.
Don't use Cucumber unless you're really doing BDD
Integration tests don't always need to run in the browser. To test more cases with less of a performance hit you can have some integration tests interact directly with model classes.
Don't test what you don't own. Integration tests should expect third-party dependencies to do their job, but should not substitute for those dependencies' own test suites.
Controller
In controller tests, only test controller logic: Redirections, authentication, permissions, HTTP status. Stub the business logic. Consider filters, etc. like private methods in unit tests, tested through public controller actions only.
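A sketch of that, reusing the scaffolded create action from the question above; the stubbing and matcher styles assume rspec-rails with rspec-mocks:

RSpec.describe EventsController, type: :controller do
  describe "POST create" do
    it "redirects when the event saves" do
      allow_any_instance_of(Event).to receive(:save).and_return(true)
      post :create, params: { event: { name: "Demo" } }
      expect(response).to redirect_to(events_path)
    end

    it "re-renders the form when it doesn't" do
      allow_any_instance_of(Event).to receive(:save).and_return(false)
      post :create, params: { event: { name: "" } }
      # render_template needs the rails-controller-testing gem on newer Rails
      expect(response).to render_template(:new)
    end
  end
end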
Others
Don't write route tests, except if you're writing an API, for the endpoints not already covered by integration tests.
Don't write view tests. You should be able to change copy or HTML classes without breaking your tests. Just assess critical view elements as part of your in-browser integration tests.
Do test your client JS, especially if it holds some application logic. All those rules also apply to JS tests.
Ignore any of these rules for business-critical stuff, or when something actually breaks (no one wants to explain to their boss/users why the same bug happened twice - that's why you should probably at least write regression tests when fixing a bug).
See more details in that post.
More coverage is better for code quality - but it costs more. There's a sliding scale here; if you're coding an artificial heart, you need more tests. The less you pay upfront, the more likely it is you'll pay later, maybe painfully.
In the full_name example, why have you placed a space between the names and ordered them first_name then last_name - does that matter? If you are later asked to sort by last name, is it OK to swap the order and add a comma? What if the last name is two words - will that additional space affect things? Maybe you also have an XML feed someone else is parsing? If you're not sure what to test for a simple undocumented function, think about the functionality implied by the method name.
I would think your company's culture is important to consider too. If you're doing more testing than everyone else, you may be wasting time. It doesn't help to have a well-tested footer if the main content is buggy. Causing the main build or other developers' builds to break would be worse, though. Finding the balance is hard - unless you are the decider, spend some time reading the test code written by other team members.
Some people take the approach of testing the edge cases and assume the main features will get worked out through usage. Considering getters/setters, I'd want a model class somewhere that has a few tests on those methods, maybe tests on the database column type ranges. This at least tells me the network is OK, a database connection can be made, I have access to write to a table that exists, etc. Pages come and go, so don't consider a page load a substitute for an actual unit test. (A testing-efficiency side note: if your automated testing runs based on file update timestamps (autotest), that test wouldn't run, and you want to know as soon as possible.)
I'd prefer to have better-quality tests rather than full coverage. But I'd also want an automated tool pointing out what isn't tested. If it's not tested, I assume it's broken. As you find failures, add tests, even if it's simple code.
If you are automating your testing, it doesn't matter much how long it takes to run. You benefit every time that test code runs - at that point, you know a minimum of your code's functionality is working, and you get a sense of how reliable the tested functionality has been over time.
100% coverage shouldn't be your goal - good testing should be. It would be misleading to think a single test of a regular expression was accomplishing anything. I'd rather have no tests than one, because with none, my automated coverage report reminds me the regex is untested.
The primary benefit you would get from writing a unit test or two for this method would be regression testing. If, sometime in the future, something was changed that impacted this method negatively, you would be able to catch it.
Whether or not that's worth the effort is ultimately up to you.
The secondary benefit I can see by looking at it would be testing edge cases, like, what it should do if last_name is "" or nil. That can reveal unexpected behavior.
(i.e. if last_name is nil, and first_name is "John", you get full_name => "John ")
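A couple of specs that pin that down; the trailing-space expectation documents the current behavior rather than blessing it:

RSpec.describe User do
  describe "#full_name" do
    it "joins first and last names with a space" do
      user = User.new(first_name: "John", last_name: "Doe")
      expect(user.full_name).to eq "John Doe"
    end

    it "exposes the trailing space when last_name is nil" do
      user = User.new(first_name: "John", last_name: nil)
      expect(user.full_name).to eq "John "   # probably not what you want
    end
  end
end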
Again, the cost-vs-benefit is ultimately up to you.
For generated code, no, there's no need to have test coverage there because, as you said, you didn't write it. If there's a problem, it's beyond the scope of the tests, which should be focused on your project. Likewise, you probably wouldn't need to explicitly test any libraries that you use.
For your particular method, it looks like that's the equivalent of a getter (it's been a bit since I've done Ruby on Rails) - testing that method would be testing the language's features. If you were transforming values or generating output, then you should have a test. But if you are just setting values or returning something with no computation or logic, I don't see the benefit of having tests cover those methods: if they are wrong, you should be able to detect the problem in a visual inspection, or the problem is a language defect.
As far as the other methods, if you write them, you should probably have a test for them. In Test-Driven Development, this is essential as the tests for a particular method exist before the method does and you write methods to make the test pass. If you aren't writing your tests first, then you still get some benefit to have at least a simple test in place should this method ever change.
I want to write tests for my app, though each time I look at rspec.info, I really don't see a definite path to take towards "doing things right" and testing first. I watched the peepcode videos on rspec more than once, yet it doesn't take. I want to take more pride in my work, and I think that testing will help. How can I break through this mental block?
Find tools that will reward you for testing. For example, make it very easy to run all the tests and get a message like
73 tests passed.
Try random testing because you can test against a lot of values quickly and easily.
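For instance, reusing the is_old_enough? example from an earlier thread (and assuming the same age: constructor), a quick random sweep inside a spec for User looks like:

it "agrees with the age rule across random ages" do
  100.times do
    age = rand(0..120)
    expect(User.new(age: age).is_old_enough?).to eq(age >= 18)
  end
end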
See if your language provides a test-coverage analysis tool that gives you percentage of statement coverage or percentage of block coverage. It is very rewarding to drive code coverage from 60% up to 90%---and if you are lucky, you will find bugs.
My key advice is to quantify your progress in testing so that you can see the numbers going up. That will make it a lot more motivating. (Gee, I wonder what other numbers that go up can be found on this site...)
I was hating it until I started creating a few testing macros. Like logging in or getting to the homepage. I found it fun to start poking at what my testing framework could really do.
It also helped to have someone else get me started by writing a few. Right away I found obvious improvements which made me want to get in there and start improving things.
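As an illustration, a login macro in RSpec might be no more than a shared module; the route and params here are made up for the sketch:

module AuthMacros
  # Hypothetical request-spec helper; adjust the route and params to your app
  def log_in(user)
    post "/login", params: { email: user.email, password: "secret" }
  end
end

RSpec.configure do |config|
  config.include AuthMacros, type: :request
end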
"Test things you don't want to break."
It might be helpful to prioritise at first. I know that typing out the full three layers of model, view, and controller specs on top of the cucumber acceptance tests can be a chore. So one idea is to just test the most critical things in your app, and add tests as you run into bugs you don't want to see again.
"Always start with a failing test."
Cucumber features plain-text "stories" that are pretty awesome for getting some really concrete tests up and running. Maybe that would be one place where you could get started. Cucumber doesn't really work with an AJAX-based app, though; for that you'd have to use Selenium or Watir instead. You can start with a failing story before writing a single line of code and quickly proceed from there to make that story pass.
"Don't test, specify."
Instead of thinking of tests, try to make a mental switch: you're not testing but SPECIFYING how your application will behave. This is design work, not nearly as boring as testing. :)
Think of it like this: if you don't test, your code is broken.
You need to see the value that testing will bring in refactoring and extending your code. Once you have a set of tests that define the behavior of your classes, you can then feel free to start making changes to improve the code. Your tests will provide the confidence that what you're doing isn't breaking the system. When you go to add new functionality to your code, running your existing tests will give you confidence that the new code you've added doesn't break anything else.
The key is to take that long term view. Testing is an investment. It takes a little bit away from the code you could be writing but eventually it will start paying off with interest. The capital that you have stored up will make it much easier to move ahead more quickly when adding new features.
Assuming you already have a list of bugs to fix, I always like to go back through and, wherever possible, create an automated test that demonstrates the bug. Then fix the bug and watch the test pass. Since you have to test the bug anyway, and the bug should already give you enough information to recreate it, you see an immediate return on your tests.
Eventually, you'll start to get a feel for putting the tests together and how to write them, and you won't need the "blueprint" of an existing bug.
I wrote a motivation post about just this case a couple of days ago. Here is the summary:
Start writing tests whenever you have an opportunity to do it (i.e. whenever you write some code). Choose any tool that makes sense to you and write any test that you feel could cover at least some tiny behavior of your application (don't care about coverage or any other scary terms from day one). Don't be afraid of primitive tests and trivial assertions - you'll get more confidence as your test coverage grows, and you'll become happier and happier as you notice that you don't need to hit F5 that often anymore. Think about testing in other, positive terms: the better you are at it, the less time you need to spend on activities you don't like (watching the spinning refresh icon in the browser, debugging) and more on things you love.
And here is the whole thing, if you are interested.
As has been mentioned previously, the easiest way to break into testing is with regression testing.
I'd also avoid doing controller specs - they are a PITA. Do heavy model testing, because that's where the logic should be in the first place.
Try spec'ing/testing a plain Ruby project before you go off into a Rails project.
Well, I'll tell you how!
First, do the following ten times manually, on different applications, before you try to automate:
Test the negative scenarios, where the result should come out negative - for example, wrong data entered that still gives you correct-looking outputs.
Take a login screen: there could be many scenarios - correct user with the wrong password, wrong user with the correct password, and so on. The most important thing is that you don't give up until you break it. That's your mantra.
Now you are thinking like a tester. Turn to your system: write the negative tests and their expected results, and then the positive tests. Design it. Then develop the framework.
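Translated into code, those negative login scenarios could start as request specs like these (the route, params, and error message are all assumptions for the sketch):

RSpec.describe "login", type: :request do
  it "rejects a correct user with the wrong password" do
    post "/login", params: { email: "user@example.com", password: "wrong" }
    expect(response.body).to include("Invalid")   # assumed error message
  end

  it "rejects an unknown user with a plausible password" do
    post "/login", params: { email: "nobody@example.com", password: "secret" }
    expect(response.body).to include("Invalid")
  end
end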