What I need is a way inside a Factory.define block to know if the factory has been called using create or build, either explicitly or simply using the default strategy.
I have a factory that has to manually adjust associations, because the original author of the code took them so far off the rails that normal creation barfs, while a normal build can be managed. I don't want to adjust those associations in the build case, but I have to in the create case.
I've been looking to see if there is something analogous to 'current_strategy' but I haven't seen anything yet. I know I can distinguish using after_create vs. after_build, but the original author made it so that the act of saving the object without doing the adjustments causes massive unhappiness--save exceptions and garbage in the database.
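For concreteness, the callback pair I mean looks roughly like this (the widget factory and its attributes are made up; only the callback names come from factory_girl's old block syntax):

# Hypothetical widget factory; only the callback names are factory_girl's.
Factory.define :widget do |f|
  f.name "Example widget"

  # Runs for both Factory.build and Factory.create, but gives no way to tell
  # which strategy invoked it.
  f.after_build  { |widget| widget.some_harmless_default ||= "x" }

  # Runs only after Factory.create has already saved the record -- too late if
  # the save itself fails without the association adjustments described above.
  f.after_create { |widget| widget.adjust_legacy_associations! }
end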
I currently have no mandate to fix the "models" he wrote and the existing rspec tests use the differentiation to do the right thing at any time. In every case the prior test author(s) have opted to simply never use create, which means setting up most of the test data is an arcane and lengthy process.
Any help would be deeply appreciated--I'm still exercising my GoogleFu but would love to be short circuited...
Oh, this is in Rails 2 (/cry)
thanks!
This sounds like a very strange problem indeed, but since you say that you're cleaning up someone else's code, I'll assume there's no easy way out of this.
I wouldn't approach this from the factory side. The factory shouldn't care because the model (not the factory) is supposed to be the gatekeeper of validity in terms of object structure and associations.
I would write specs that separately create and build objects, and test their associations to make sure they are correct (according to what you want the new behavior to ultimately be). Then, get those specs to pass by refactoring the models to do what you actually need them to do. This is how you clean up legacy code, and alter its behavior - write tests that will pass when the new functionality is correct, and refactor until they pass, making incremental changes with each test/refactoring.
When your new specs are passing, you're well on your way. If the previous author put in specs of their own that verify the previous behavior, then you'll have to work on figuring out which, if any, of those tests are currently valid (many of them may be, since they represent the requirements that the app currently fulfils), and removing ones that aren't.
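A rough sketch of what I mean, with a hypothetical Widget model and association name, in the RSpec 1 style a Rails 2 app would be using:

describe Widget do
  describe "when built" do
    it "does not touch the legacy associations" do
      widget = Factory.build(:widget)
      widget.legacy_parts.should be_empty
    end
  end

  describe "when created" do
    it "adjusts the legacy associations so the record saves cleanly" do
      widget = Factory.create(:widget)
      widget.should_not be_new_record
      widget.legacy_parts.should_not be_empty
    end
  end
end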
When writing specs for a simple Rails app, is the following a correct approach for full test coverage?
Write feature specs for all user stories
Write controller specs to ensure that individual action responses are correct and all required variables are set
Write model specs to ensure all methods, validations, etc. are working as intended
Write mailer specs
Write routing specs
Is this enough, too much (e.g. can I skip some lower-level specs if I've written feature specs), or not enough? Why?
You don't need to write specs for every object in every layer either to get 100% test coverage or to test-drive (require you to implement) all of the important behavior in your application. Instead, as behavior-driven development (BDD) advises, write specs outside in, and write lower-level specs only as necessary.
The most important measure of test completeness is requirement coverage: it's helpful for each user story, and each detail of each story that requires new code, to be represented in at least one test. If you're following typical agile practices (mentioning user stories suggests that you are) your tests are probably the only place where you record your requirements, so you probably can't put a number on this kind of coverage. It's also helpful to have
line coverage (what most people mean when they say test coverage), meaning that every line of code is exercised by at least one test, and
integration coverage, meaning that every method call from one class to another is exercised by at least one test.
For each story,
Write only the feature specs that will test-drive all of the story's distinct happy paths.
Write additional feature specs to ensure integration coverage of architecturally interesting minor variations of happy paths and of sad paths. For example, I often write three feature specs for a story that involves a form: one where the user fills in every possible field and succeeds, another where the user fills in as little information as possible and still succeeds (ensuring that unspecified values and defaults work as intended), and one where the user makes a mistake, fails, corrects the mistake and succeeds.
At this point you've already test-driven every layer (controllers, models, views, helpers, mailers, etc.) into existence, with only feature specs.
Write model and helper specs to drive out detailed requirements which live entirely in those classes. For example, once you've written a single sad-path feature spec that establishes that entering one particular invalid attribute sends the user back to edit their form submission and displays a message, you can handle other invalid attributes entirely by writing more examples in that model's spec that test that model attributes are validated, and let the architecture that you've already test-driven propagate the errors back to the user.
Note that although your feature specs already test the happy paths through model and helper methods, as soon as you start writing examples for a method for minor or error cases, you'll probably want to write the happy-path example or examples for that method too, so you can see the entire description of the method in one place, and so you can test the method fully just by running all its examples and not also have to run any feature specs.
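A sketch of what those extra model examples might look like (the Event model and its attributes are placeholders):

describe Event do
  it "is valid with a name and a date" do
    Event.new(:name => "Launch", :date => Date.today).should be_valid
  end

  it "is invalid without a name" do
    Event.new(:name => nil, :date => Date.today).should_not be_valid
  end

  it "is invalid without a date" do
    Event.new(:name => "Launch", :date => nil).should_not be_valid
  end
end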
You might not need some kinds of specs at all:
Well-factored controller actions are short and have few or no conditionals, so you often won't need any controller specs at all. Write them only when needed, and stub out model, mailer, etc. behavior to keep them simple and fast.
Similarly, views and mailers should have few or no conditionals (complex code should be refactored into helper and model methods), so you often won't need view or mailer specs at all.
Your feature specs will have test-driven all the routes you need, so you probably won't need routing specs. I've only ever gotten use out of routing specs when I had to do a major refactor of routes, as when upgrading from one major version of Rails to the next.
As long as you always write a test before you write new code, you'll always have 100% line coverage.
That testing strategy sounds really comprehensive. If you had all of these tests in place you would have great test coverage. However, it would take you longer to deliver your project, and you would not be as agile as someone doing more limited testing. Testing has to suit the project. Don't over-test: over-testing can cost time and money. Don't under-test: under-testing can cost time and money.
There are right ways to do unit testing. There are right ways to do integration testing. The glove has to fit. If your application is largely front-end facing then perhaps it's best to start with integration tests. If you're writing a back-end application, or perhaps an API, then unit tests may be a better place to start. I think approaching the problem with one style of testing and then expanding to other styles is a better start than trying to test every layer of your application at once.
Why not start with simple unit tests? They are easy to write. Write these tests and then track how many bugs you ship. Are you letting in too many bugs? Are you having a lot of regression issues? Are there bugs getting through to production that your suite is not picking up? If the answer is yes, then maybe it's time to write some higher-level tests. Remember, the higher-level a test is, the more development cost you will have to pay.
If you're not shipping bugs then you have no reason to write any more tests. Remember the end goal here: we want to ship bug-free code. If we can write one test, and one test alone, that ensures we are doing this, then there is no reason to test any further.
I understand that RSpec is used to test with specific examples, but it seems to me that errors are most often dynamic in nature. Therefore, I'm having some doubts about whether unit testing is really useful for me (I'm not saying that RSpec is not useful, of course).
I'm thinking that if I add a validation on a model, this enforces its behaviour. Since I already know that I will put it in, what is the real point of creating a test for it first? After all, it will always pass if I don't change the validation (in which case I will notice, of course). Provided that I'm the only developer, isn't that a little bit too much work for no real reason?
Moreover, let's say that I have a test that specifies that user.name must be either 'Tom' or 'John'. My tests work great, because I specify user.name inside the test. However, in the real application, it may happen that name becomes 'Alex'. RSpec would not be able to enforce the behaviour, would it?
And I would be left with a passing test, but an error.
What do you think about all that? Are my concerns correct, or am I not thinking it through? I need to know whether I would get some strong benefits from messing with RSpec, or whether it would mostly be a waste of time.
(Again, I understand that RSpec can be useful, but what about the matters I raise here?)
I'm thinking that if I add a validation on a model, this enforces its behaviour.
Adding validation to your model gives that model some behavior but does not enforce it. A behavior is only required when some other piece of code depends on it and will break if it changes. Until a behavior is used removing it would have no impact on your project and so nothing enforces that the behavior must be present.
Since I already know that I will put it in, what is the real point of creating a test for it first? After all, it will always pass if I don't change the validation (in which case I will notice, of course).
Writing tests, especially writing tests first, gives you an immediate use of your code. Something which requires a behavior to be present and which should fail quickly, reliably, and automatically if that behavior changes. Tests enforce the public interface to your code because otherwise you will change that interface and you will not notice.
Provided that I'm the only developer, isn't that a little bit too much work for no real reason?
You may be the only person working on the project but you can't remember everything. Write tests so that next week you have something to make sure you don't violate the assumptions you made today.
Moreover, let's say that I have a test that specifies that user.name must be either 'Tom' or 'John'.
That is not specific enough to be a good test. You can test that "a user should be valid when user.name is 'Tom'" or "user.name must be included in ['Tom', 'John']" or even "a user should be invalid if user.name is 'Alex'". You cannot hope to write a test for all possible inputs to your application so you need to make intelligent choices about what to test. Test that valid inputs produce valid results. Test that invalid inputs fail in expected ways. Don't worry about testing all possible invalid inputs or invalid uses of your code.
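Those three might look roughly like this (assuming a User model whose only relevant validation is on name; any other required attributes would need to be filled in too):

describe User do
  it "is valid when user.name is 'Tom'" do
    User.new(:name => "Tom").should be_valid
  end

  it "is valid for every name in ['Tom', 'John']" do
    ["Tom", "John"].each do |name|
      User.new(:name => name).should be_valid
    end
  end

  it "is invalid when user.name is 'Alex'" do
    User.new(:name => "Alex").should_not be_valid
  end
end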
If it is not valid for "user.name" to be "Alex" then perhaps you should test that the code calling your User object does not try to set its name to "Alex". If "Alex" is a valid name but your code failed anyway then you should write more robust code, better tests, and a test for the name "Alex" to make sure you fixed your User class to handle that name.
Perhaps most importantly, if you are writing tests first then they can actually drive you to design a better interface for your User class. One which more clearly expresses the behavior of the "name" attribute and discourages you from setting invalid names.
Tests are tests. They test things. They don't enforce things.
They are useful because you can see what is and isn't working in your application. When you override that name= setter to do something fancy and it breaks on a simple case that you had written a test for, that test just saved your ass. For simple cases like this, going without a test might be okay. It's really rare that you see 100% test coverage in non-open-source applications. Until you learn what you don't need to test, though, it's easier to just write tests for everything you can.
If you don't understand Test Driven Development or why you would test, I think you should Google around on the subject a bit and you can get a good taste of what there is out there and why you should use it (and you should).
Test cases, in my opinion, are sort of like documenting the requirements: we ensure that these requirements are met when the tests pass. The first time around it won't seem to add much, as we are writing the code with the requirements in mind. It is when we have to change the code to incorporate something else (or just refactor it for performance) that the tests really come into play. At that point we have to ensure that the previous requirements are still met while making the change, i.e. that the test cases do not fail. This way, by having tests, we are making a note of the requirements and making sure they are still met whenever we change or refactor the code.
I've been writing tests for a while now and I'm starting to get the hang of things. But I've got some questions concerning how much test coverage is really necessary. The consensus seems pretty clear: more coverage is always better. But, from a beginner's perspective at least, I wonder if this is really true.
Take this totally vanilla controller action for example:
def create
  @event = Event.new(params[:event])
  if @event.save
    flash[:notice] = "Event successfully created."
    redirect_to events_path
  else
    render :action => 'new'
  end
end
Just the generated scaffolding. We're not doing anything unusual here. Why is it important to write controller tests for this action? After all, we didn't even write the code - the generator did the work for us. Unless there's a bug in rails, this code should be fine. It seems like testing this action is not all too different from testing, say, collection_select - and we wouldn't do that. Furthermore, assuming we're using cucumber, we should already have the basics covered (e.g. where it redirects).
The same could even be said for simple model methods. For example:
def full_name
  "#{first_name} #{last_name}"
end
Do we really need to write tests for such simple methods? If there's a syntax error, you'll catch it on page refresh. Likewise, cucumber would catch this so long as your features hit any page that called the full_name method. Obviously, we shouldn't be relying on cucumber for anything too complex. But does full_name really need a unit test?
You might say that because the code is simple the test will also be simple. So you might as well write a test since it's only going to take a minute. But it seems that writing essentially worthless tests can do more harm than good. For example, they clutter up your specs making it more difficult to focus on the complex tests that actually matter. Also, they take time to run (although probably not much).
But, like I said, I'm hardly an expert tester. I'm not necessarily advocating less test coverage. Rather, I'm looking for some expert advice. Is there actually a good reason to be writing such simple tests?
My experience in this is that you shouldn't waste your time writing tests for code that is trivial, unless you have a lot of complex stuff riding on the correctness of that triviality. I, for one, think that testing stuff like getters and setters is a total waste of time, but I'm sure that there'll be more than one coverage junkie out there who'll be willing to oppose me on this.
For me tests facilitate three things:
They guarantee unbroken old functionality. If I can check that nothing new that I put in has broken my old things by running tests, it's a good thing.
They make me feel secure when I rewrite old stuff. The code I refactor is very rarely the trivial kind. If, however, I want to refactor non-trivial code, having tests to ensure that my refactorings have not broken any behavior is a must.
They are the documentation of my work. Non-trivial code needs to be documented. If, however, you agree with me that comments in code are the work of the devil, having clear and concise unit tests that make you understand what the correct behavior of something is, is (again) a must.
Anything I'm sure I won't break, or that I feel is unnecessary to document, I simply don't waste time testing. Your generated controllers and model methods, then, I would say are all fine even without unit tests.
The only absolute rule is that testing should be cost-efficient.
Any set of practical guidelines to achieve that will be controversial, but here is some advice to help avoid tests that are generally wasteful, or that do more harm than good.
Unit
Don't test private methods directly, only assess their effects indirectly through the public methods that call them.
Don't test internal states
Only test non-trivial methods, where different contexts may get different results (calculations, concatenation, regexes, branches...)
Don't assess things you don't care about, e.g. full copy on some message or useless parts of complex data structures returned by an API...
Stub all the things in unit tests: they're called unit tests because you're only testing one class, not its collaborators. With stubs/spies, you test the messages you send them without testing their internal logic (see the sketch after this list).
Consider private nested classes as private methods
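A minimal sketch of that stubbing advice (the Order and PriceCalculator classes are invented for illustration; only the message between them is exercised):

describe Order do
  it "asks its price calculator for a total without exercising the calculator's own logic" do
    calculator = double("PriceCalculator")
    calculator.should_receive(:total_for).and_return(42)

    order = Order.new(calculator)
    order.total.should == 42
  end
end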
Integration
Don't try to test all the combinations in integration tests. That's what unit tests are for. Just test happy-paths or most common cases.
Don't use Cucumber unless you're really doing BDD.
Integration tests don't always need to run in the browser. To test more cases with less of a performance hit you can have some integration tests interact directly with model classes.
Don't test what you don't own. Integration tests should expect third-party dependencies to do their job, but should not substitute for those dependencies' own test suites.
Controller
In controller tests, only test controller logic: Redirections, authentication, permissions, HTTP status. Stub the business logic. Consider filters, etc. like private methods in unit tests, tested through public controller actions only.
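A sketch of that, using a scaffold-style EventsController create action like the one shown earlier and stubbing the model layer (rspec-rails' mock_model is assumed to be available):

describe EventsController do
  describe "POST create" do
    it "redirects to the index when the event saves" do
      Event.stub(:new).and_return(mock_model(Event, :save => true))
      post :create, :event => { "name" => "Launch" }
      response.should redirect_to(events_path)
    end

    it "re-renders the form when the event does not save" do
      Event.stub(:new).and_return(mock_model(Event, :save => false))
      post :create, :event => { "name" => "" }
      response.should render_template("new")
    end
  end
end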
Others
Don't write route tests, except if you're writing an API, for the endpoints not already covered by integration tests.
Don't write view tests. You should be able to change copy or HTML classes without breaking your tests. Just assess critical view elements as part of your in-browser integration tests.
Do test your client JS, especially if it holds some application logic. All those rules also apply to JS tests.
Ignore any of those rules for business-critical stuff, or when something actually breaks (no one wants to explain to their boss/users why the same bug happened twice; that's why you should probably write at least regression tests when fixing a bug).
See more details on that post.
More coverage is better for code quality- but it costs more. There's a sliding scale here, if you're coding an artificial heart, you need more tests. The less you pay upfront, the more likely it is you'll pay later, maybe painfully.
In the full_name example, why have you placed a space between the names and ordered them first_name then last_name? Does that matter? If you are later asked to sort by last name, is it OK to swap the order and add a comma? What if the last name is two words; will that additional space affect things? Maybe you also have an XML feed someone else is parsing? If you're not sure what to test for a simple, undocumented function, think about the functionality implied by the method name.
I would think your company's culture is important to consider too. If you're doing more testing than others, then you're probably wasting time. It doesn't help to have a well-tested footer if the main content is buggy. Causing the main build or other developers' builds to break would be worse, though. Finding the balance is hard; unless you are the decider, spend some time reading the test code written by other team members.
Some people take the approach of testing the edge cases and assuming the main features will get worked out through usage. Considering getters/setters, I'd want a model class somewhere that has a few tests on those methods, maybe testing the database column type ranges. That at least tells me the network is OK, a database connection can be made, I have access to write to a table that exists, and so on. Pages come and go, so don't consider a page load a substitute for an actual unit test. (A testing-efficiency side note: if your automated test runs are triggered by file update timestamps (autotest), that test wouldn't run, and you want to know as soon as possible.)
I'd prefer to have better quality tests, rather than full coverage. But I'd also want an automated tool pointing out what isn't tested. If it's not tested, I assume it's broken. As you find failure, add tests, even if it's simple code.
If you are automating your testing, it doesn't matter how long it takes to run. You benefit every time that test code is run- at that point, you know a minimum of your code's functionality is working, and you get a sense of how reliable the tested functionality has been over time.
100% coverage shouldn't be your goal; good testing should be. It would be misleading to think a single test of a regular expression was accomplishing much. I'd rather have no tests than just one, because then my automated coverage report reminds me the RE is still effectively untested.
The primary benefit you would get from writing a unit test or two for this method would be regression testing. If, sometime in the future, something was changed that impacted this method negatively, you would be able to catch it.
Whether or not that's worth the effort is ultimately up to you.
The secondary benefit I can see by looking at it would be testing edge cases, like, what it should do if last_name is "" or nil. That can reveal unexpected behavior.
(i.e. if last_name is nil, and first_name is "John", you get full_name => "John ")
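Such an edge-case example might look roughly like this (assuming full_name lives on a Person-like model whose attributes can be mass-assigned; the class name is made up):

describe "#full_name" do
  it "joins first and last name with a space" do
    Person.new(:first_name => "John", :last_name => "Doe").full_name.should == "John Doe"
  end

  it "currently produces a trailing space when last_name is nil" do
    Person.new(:first_name => "John", :last_name => nil).full_name.should == "John "
  end
end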
Again, the cost-vs-benefit is ultimately up to you.
For generated code, no, there's no need to have test coverage there because, as you said, you didn't write it. If there's a problem, it's beyond the scope of the tests, which should be focused on your project. Likewise, you probably wouldn't need to explicitly test any libraries that you use.
For your particular method, it looks like that's essentially a trivial accessor (it's been a bit since I've done Ruby on Rails) - testing that method would be testing the language features. If you were changing values or generating output, then you should have a test. But if you are just setting values or returning something with no computation or logic, I don't see the benefit of having tests cover those methods: if they are wrong, you should be able to detect the problem by visual inspection, or else the problem is a language defect.
As far as the other methods, if you write them, you should probably have a test for them. In Test-Driven Development, this is essential as the tests for a particular method exist before the method does and you write methods to make the test pass. If you aren't writing your tests first, then you still get some benefit to have at least a simple test in place should this method ever change.
I'm starting a Rails app for a customer and am considering either creating a mind map or jumping straight to a Cucumber specification.
How do you plan your Rails app?
As an additional question, say you also start with Cucumber, at which point would you write Unit tests? Before satisfying the specifications?
I've got a 6 step process.
1. I prefer to work out the model relationships and uses before doing anything. Generally I try to define models as units containing coherent chunks of information. Usually this starts by identifying the orthogonal resources my application will need (Users, Posts, etc.). I then figure out what information each of those resources absolutely needs (attributes) and may potentially need (associations), and how that information will likely be operated on (methods); from there I define a set of rules to govern resource consistency (validations).
I usually iterate over my design a few times, because the act of defining other models usually makes me rethink ones I've already done. Once I have a model design I like, I will start refactoring or specializing (subclassing) models to clarify the design.
2. I write the migrations and make skeletons for my models. I usually won't write tests until I have a first draft of methods and validations implemented. It's not always obvious how to implement things until you've given it some moderate thought.
3. Next comes the test suite. It doesn't matter what I use to write the tests, so long as I can be certain the backend is sane.
4. This is when I piece together the control flow. What happens on a successful request? An unsuccessful request? Which controller actions will link to others? Usually there is a 1-1 mapping between controllers and models (not counting subclasses of models); every so often I'll encounter situations where I need to act on multiple model types, and for that I'll probably create a new controller. Depending on how complex my app is, I may model the flow as a state machine.
5. Next I create the views. I start by sketching out the UI, which is heavily influenced by my models' relationships and attributes. I abstract out common parts, then write the views.
6. Polish the UI. I create the CSS, and start to replace links with remote calls, or even just JavaScript when appropriate.
I may interleave steps 2 and 3. I find it's very easy to write a test just after I write the code to be tested, especially because I'm usually testing things in a console as I write, and half the test is written by pasting from the console.
I may also compartmentalize steps 4 and 5 for each model/controller. At any point I may go back and revise a previous decision and propagate those changes through my steps.
I start with sketches of the user interface and then progress to HTML mockups. Once the UI design is finalised I can identify the RESTful resources in the application and their relationships.
I don't think writing only Cucumber features as specifications is a good idea. Writing test code without being able to see it pass leads to errors in the tests and increases the time you'll need to correct them later.
So I'd do the following:
Write a mind map, but keep it simple, with just the major ideas of the project.
Start writing tests and code at the same time (write one test, make it pass, write another, ...).
That way you'll write your specifications while driving your application, keeping it clean but also remaining agile and able to change some ideas in the middle of the project.
Is it necessary to unit test ActiveRecord validations or they are well-tested already and hence reliable enough?
Validations per se should be trustworthy, but you may want to check that the validation is present.
Put another way, a good way to test something is as if it were a black box, abstracting the tests from the implementation. So, for instance, you may have a test that checks that a person model can't be saved without a name, but that doesn't care about how the Person class performs that validation.
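A black-box example along those lines (the Person model and its name attribute are assumed):

describe Person do
  it "cannot be saved without a name" do
    person = Person.new(:name => nil)
    person.save.should be_false
    person.should_not be_valid
  end
end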
It should be sufficient to accept that libraries such as ActiveRecord are better-tested by the developers than they ever will be by you: for them it's a primary concern, for you it's at best tangential.
That's not to say there won't be bugs - I found a small one in the MS SQL Server adapter once, a long time ago - but the kind of test you're likely to be implementing is highly unlikely to expose them, as they're most likely to be edge cases. If you do find a bug, of course, it's probably very helpful if you report it with a test case that exposes it!
I would only test ActiveRecord internals if I was seeking to understand better a particular aspect that the library implements. I would not include those exploratory tests in any application project, since they're not really relevant to the project.
In general, you should write tests for code that you write yourself: if you live or try to live in a TDD world, the tests should be written before. If your models have validation rules then you should almost certainly write tests to ensure the rules are present. In most cases, the tests will be trivial, but they'll really be useful if a line inadvertently gets deleted some time in the future...
As Mike wrote, at the very least you should test that the validation exists. It's just a bit of double-entry accounting (sanity check) that is easy enough to do.
Depending on the situation, you should also test that your model is valid or invalid under particular circumstances. For example if your field requires a certain format, then test the example formats that are valid and those that aren't. It's much easier to see what this means by reading a few examples in your tests:
class Person < ActiveRecord::Base
  validates_format_of :email,
    :with => /\A([^@\s]+)@((?:[-a-z0-9]+\.)+[a-z]{2,})\Z/i
end
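For instance, examples against that validation might look like this (assuming no other validations on Person get in the way):

describe Person do
  it "accepts a conventionally formatted email address" do
    Person.new(:email => "someone@example.com").should be_valid
  end

  it "rejects an address with no @" do
    Person.new(:email => "someone.example.com").should_not be_valid
  end

  it "rejects an address with no domain" do
    Person.new(:email => "someone@").should_not be_valid
  end
end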
Yes, the validations are well-tested and reliable enough. But your correct use of the validations is what you want to verify.
As a side note, Ryan Bigg's blog post has_and_belongs_to_many double insert mentions someone encountering a bug in ActiveRecord (not validation related, though). As he points out, don't assume Rails can't possibly have a bug, because we know there are 900 open tickets for Rails.
But yes, the main reason you'd write a test is to check that your use of ActiveRecord is correct.