Wondering whether testing relations in Rails is widely considered a best practice or not.
We have a disagreement on the team about testing relations.
IMO it's redundant: having each has_many :people mirrored by a should have_many(:people) test looks repetitive, just copy-paste from the model into a test. I cannot imagine it spontaneously stopping to work or some other workflow breaking it; the only way to break it is to remove or change the line itself (and if someone is changing code randomly, that someone can just as easily remove the test too). Moreover, it doesn't actually test the relation (e.g., that Company returns the people with the matching company_id, or that destroying a company actually destroys its people, which can fail due to a destroy validation or a foreign key constraint); it only tests that the relation is declared.
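By "actually testing the relation" I mean something like the following sketch (Company/Person and dependent: :destroy are hypothetical here):

RSpec.describe Company do
  it 'destroys its people when it is destroyed' do
    company = Company.create!(name: 'Acme')
    company.people.create!(name: 'Ann')
    # exercises the dependent: :destroy behavior, not just the declaration
    expect { company.destroy }.to change(Person, :count).by(-1)
  end
end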
And we don't test that a class responds to every single method; when we do test, we test what the method does.
Similarly, we don't test other static things, like templates (simple HTML with no logic); we assume Rails will render the specified HTML, and it cannot break unless someone changes it.
On the other hand, an argument can be made that relations are an extremely important part of the app, and that in the effort to reach 100% coverage they should be tested too.
Usually in case of disagreements we look to best practices, but I cannot find any mention of relation tests, neither for nor against.
Can you please help me find out whether this is common practice or not? (Links to best practices that mention this or something similar, or your own experience working with such tests, would help.)
Thank you in advance.
You've asked an opinion-based question that is hard to answer with sources, so you might need to rethink it. However, I'll give it a try.
I don't know about best practices, but in my opinion anything that can be "fat-fingered" should be tested, relationships/associations included. Now you want to find out how to avoid testing Rails logic and instead just test that the relation was set up; that's not tough to do.
Minitest suites I contribute to all have tests for the existence of expected relationships/associations.
Again, the idea is that if someone accidentally removes or adds a character or two to that line of code, there is a specific test to catch it. Not only that, but removing the line of code deliberately should also include removing a test deliberately.
Yes, of course, if someone wants to remove that line of code completely and remove the test along with it, they should be able to do that. And at that point the assumption is that the entire task was deliberate. So that's not an argument against testing, in my opinion. The test is there to catch accidental mistakes, not deliberate actions.
Additionally, just because the test seems like copy/paste or repetition, that's also not a reason to avoid it in my opinion. The better the application code is, the more all tests will start to look repetitive or like copy/paste boilerplate. That's actually a good thing. It means the application code does just one small thing (and likely does it well). The more repetitive tests get, the easier they are to write, the more likely they are to be written, and the more you can refactor, simplify, and DRY them up as well.
In my opinion, this should be a best practice, and I've not had much push-back from any of the dozen or so other Rails developers I've worked with personally. And on a wider scale, the fact that shoulda-matchers has specific matchers for this means there are enough other developers out there wanting this capability.
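For example, with shoulda-matchers and RSpec, a declaration test is a one-liner (a sketch, assuming the gem is installed and configured, and a hypothetical Company model):

RSpec.describe Company, type: :model do
  it { is_expected.to have_many(:people).dependent(:destroy) }
  it { is_expected.to belong_to(:owner) }
end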
Here is a minitest example of testing a relationship/association without testing Rails logic itself:
test 'contains a belongs_to relationship to some models' do
  expected = [:owner, :make].sort
  actual = Car.reflect_on_all_associations(:belongs_to).map(&:name).sort
  assert_equal(expected, actual)
end
To your point that it doesn't test the actual behavior of the code, and only tests that the relationship was defined, whereas you'd expect a test for a method written on a model to exercise the actual behavior of the method itself, not just that it was defined...
That's because you as an end-developer wrote that model method, so its behavior should be tested. But you do not want to test logic existing in the Rails core, as the Rails team has already written the tests for that.
Said another way, it makes perfect sense not to test the functionality of the association, but only test that it is defined, because the functionality is tested already by the Rails test suite.
In our company we don't test Rails' internal logic.
We don't check that Rails handles has_many, belongs_to, etc. correctly.
That's Rails-internal stuff you shouldn't have to bother about.
Normally you have more than enough other stuff to test.
I'm doing a contract job in Ruby on Rails, and it is turning into a disaster for productivity, largely because of writing tests. This is my first time using TDD in practice, and it hasn't gone well because I am spending so much time writing tests that I've hardly gotten any results to show. I'm thinking that perhaps I'm trying to test too much by writing tests for every feature in each model and controller.
If I can't aim for 100% test coverage, what are some criteria that I could use to determine "is this feature worth testing"? For example, would integration tests trump unit tests?
If you're just getting started with testing in the ruby or Rails world, I'd make the following suggestions:
Start with RSpec. Automated acceptance/integration testing with a tool like Cucumber can be a large time sink for a single developer who has never used it before. Success with those tools is often contingent upon A) quality UI specs that are very specific, B) UI conventions that are easily testable with headless browser emulators, and C) familiarity with the tools ahead of go time.
Test individual methods. Test that they return the values you expect. Test that when you feed them bad data, they respond in an appropriate manner. Test any edge cases as you become aware of them.
Be very careful that you are stubbing and mocking correctly in your tests. It's easy to write a test for 30 minutes only to discover that you're not really testing the thing you need to be testing.
Don't go overboard with micro-managing your TDD - some folks will tell you to test every tiny step in writing a method: first test that the model has a method called 'foo', then test whether it returns non-nil, then test that it returns a string, then test that the string contains a certain substring. While this approach can be helpful when you're implementing something complex, it can be a time sink as well. Just skip the first two steps. That being said, it's easy to go too far in the other direction, specifying a method's behavior with a complex test before you begin implementing it, then beginning the implementation only to find you've botched the test.
Don't write tests that just say 'this is how I wrote the feature, don't change it'. Tests should reflect the essential business logic of a method. If you are writing tests specifically so that they will fail if another developer changes some non-critical part of your implementation, you are wasting time and adding superfluous lines of code.
These are just a few observations I've made from having been in similar situations. Best of luck, testing can be a lot of fun! No, really. I mean it.
100% test coverage is a fantasy and a waste of time. Your tests should serve a purpose, typically to give you confidence that the code you wrote works. Not absolute confidence, but some amount of confidence. TDD should be a tool, not a restriction.
If it's not making your work come out better, why are you doing it? More importantly, if you fail to produce useful code and lose the contract, those tests weren't too useful after all were they? It's a balance, and it sounds like you're on the wrong side.
If you're new to Rails, you can get a small dose of its opinionated creator's view on testing in this 37signals blog article on the topic. Small rules of thumb, but maybe something to push you in a new direction on the subject.
There are also good references on improving your use of RSpec like betterspecs.org, The RSpec Book and Everyday Rails Testing with RSpec. Using it poorly can result in a lot of headache maintaining the specs.
My advice is to get your testing and your writing of code as tightly coupled as possible, combined with an Agile approach to the project.
This way you will constantly have new stuff to show the client, as testing will just be baked in. The biggest mistake I see with teams that are new to testing is continuing to treat testing as a separate activity. Most of all, I keep seeing developers say that a feature is done... but will need some refactoring and some better tests at "some point". "Some point" rarely comes. One thing is inescapable, though: for at least several months it will be slower in the short term, but the quality will be much better, and you'll avoid building the "big ball of mud" I've seen in so many larger institutions.
A few things:
Don't
Test the database
Test ActiveRecord or whatever ORM you're using
Do
For models:
Test validations (see the sketch below)
Test custom logic
For controllers:
Test non-trivial routes
Test redirects
Test authentication
Test instance variable assignment
For views:
I haven't gotten around to testing views, but I've run into situations where I wish I had, for example testing fields in forms.
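As a quick sketch of the "test validations" item above (an Event model with a required name is hypothetical):

require 'test_helper'

class EventTest < ActiveSupport::TestCase
  test 'is invalid without a name' do
    event = Event.new(name: nil)
    assert_not event.valid?
    assert_includes event.errors[:name], "can't be blank"
  end
end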
More at Rails Guides
What I need is a way, inside a Factory.define block, to know whether the factory has been called using create or build, either explicitly or simply via the default strategy.
I have a factory that has to manually adjust associations, because the original author of the code took them so far off the rails that normal create barfs, while normal build can be managed. I don't want to adjust those associations in the build case, but I have to in the create case.
I've been looking for something analogous to a 'current_strategy', but I haven't seen anything yet. I know I can distinguish using after_create vs. after_build, but the original author made it so that the act of saving the object without the adjustments causes massive unhappiness: save exceptions and garbage in the database.
I currently have no mandate to fix the "models" he wrote and the existing rspec tests use the differentiation to do the right thing at any time. In every case the prior test author(s) have opted to simply never use create, which means setting up most of the test data is an arcane and lengthy process.
Any help would be deeply appreciated; I'm still exercising my Google-fu, but would love to be short-circuited...
Oh, this is in Rails 2 (/cry)
Thanks!
This sounds like a very strange problem indeed, but since you say that you're cleaning up someone else's code, I'll assume there's no easy way out of this.
I wouldn't approach this from the factory side. The factory shouldn't care because the model (not the factory) is supposed to be the gatekeeper of validity in terms of object structure and associations.
I would write specs that separately create and build objects, and test their associations to make sure they are correct (according to what you want the new behavior to ultimately be). Then, get those specs to pass by refactoring the models to do what you actually need them to do. This is how you clean up legacy code, and alter its behavior - write tests that will pass when the new functionality is correct, and refactor until they pass, making incremental changes with each test/refactoring.
When your new specs are passing, you're well on your way. If the previous author put in specs of their own that verify the previous behavior, then you'll have to work on figuring out which, if any, of those tests are currently valid (many of them may be, since they represent the requirements that the app currently fulfils), and removing ones that aren't.
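For example, separate build and create specs might look like this (a sketch only; Widget and its parts are hypothetical, using the Rails-2-era FactoryGirl and RSpec APIs):

describe Widget do
  it 'leaves associations untouched when built' do
    widget = Factory.build(:widget)
    widget.parts.should be_empty
  end

  it 'sets up associations when created' do
    widget = Factory.create(:widget)
    widget.parts.should_not be_empty
  end
end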
I've been writing tests for a while now and I'm starting to get the hang of things. But I've got some questions concerning how much test coverage is really necessary. The consensus seems pretty clear: more coverage is always better. But, from a beginner's perspective at least, I wonder if this is really true.
Take this totally vanilla controller action for example:
def create
  @event = Event.new(params[:event])
  if @event.save
    flash[:notice] = "Event successfully created."
    redirect_to events_path
  else
    render :action => 'new'
  end
end
Just the generated scaffolding. We're not doing anything unusual here. Why is it important to write controller tests for this action? After all, we didn't even write the code - the generator did the work for us. Unless there's a bug in rails, this code should be fine. It seems like testing this action is not all too different from testing, say, collection_select - and we wouldn't do that. Furthermore, assuming we're using cucumber, we should already have the basics covered (e.g. where it redirects).
The same could even be said for simple model methods. For example:
def full_name
  "#{first_name} #{last_name}"
end
Do we really need to write tests for such simple methods? If there's a syntax error, you'll catch it on page refresh. Likewise, cucumber would catch this so long as your features hit any page that called the full_name method. Obviously, we shouldn't be relying on cucumber for anything too complex. But does full_name really need a unit test?
You might say that because the code is simple the test will also be simple. So you might as well write a test since it's only going to take a minute. But it seems that writing essentially worthless tests can do more harm than good. For example, they clutter up your specs making it more difficult to focus on the complex tests that actually matter. Also, they take time to run (although probably not much).
But, like I said, I'm hardly an expert tester. I'm not necessarily advocating less test coverage. Rather, I'm looking for some expert advice. Is there actually a good reason to be writing such simple tests?
My experience in this is that you shouldn't waste your time writing tests for code that is trivial, unless you have a lot of complex stuff riding on the correctness of that triviality. I, for one, think that testing stuff like getters and setters is a total waste of time, but I'm sure that there'll be more than one coverage junkie out there who'll be willing to oppose me on this.
For me, tests facilitate three things:
They guarantee old functionality stays unbroken. If I can check that nothing new I put in has broken my old things by running the tests, that's a good thing.
They make me feel secure when I rewrite old stuff. The code I refactor is very rarely the trivial kind; if I want to refactor nontrivial code, having tests to ensure that my refactorings have not broken any behavior is a must.
They are the documentation of my work. Nontrivial code needs to be documented. If, however, you agree with me that comments in code are the work of the devil, then having clear and concise unit tests that make you understand the correct behavior of something is (again) a must.
Anything I'm sure I won't break, or that I feel is unnecessary to document, I simply don't waste time testing. Your generated controllers and model methods, then, I would say are all fine even without unit tests.
The only absolute rule is that testing should be cost-efficient.
Any set of practical guidelines to achieve that will be controversial, but here is some advice to help you avoid tests that are generally wasteful or do more harm than good.
Unit
Don't test private methods directly, only assess their effects indirectly through the public methods that call them.
Don't test internal states
Only test non-trivial methods, where different contexts may get different results (calculations, concatenation, regexes, branches...)
Don't assess things you don't care about, e.g. full copy on some message or useless parts of complex data structures returned by an API...
Stub all the things in unit tests: they're called unit tests because you're testing only one class, not its collaborators. With stubs/spies, you test the messages you send collaborators without testing their internal logic (see the sketch after this list).
Consider private nested classes as private methods
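A minimal sketch of that stubbing advice, using plain RSpec doubles (Invoice and PdfRenderer are hypothetical):

RSpec.describe Invoice do
  it 'sends #render to its renderer without exercising the renderer itself' do
    renderer = double('PdfRenderer')
    # the message expectation is the test; the collaborator's logic stays untested here
    expect(renderer).to receive(:render).with(kind_of(Invoice))
    Invoice.new(renderer: renderer).to_pdf
  end
end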
Integration
Don't try to test all the combinations in integration tests; that's what unit tests are for. Just test happy paths or the most common cases (see the sketch after this list).
Don't use Cucumber unless you really do BDD.
Integration tests don't always need to run in the browser. To test more cases with less of a performance hit you can have some integration tests interact directly with model classes.
Don't test what you don't own. Integration tests should expect third-party dependencies to do their job, but not substitute to their own test suite.
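A happy-path sketch for that advice, assuming Capybara feature specs and the scaffold-style Event flow shown elsewhere on this page:

RSpec.describe 'Event creation', type: :feature do
  it 'creates an event through the UI' do
    visit new_event_path
    fill_in 'Name', with: 'RailsConf'
    click_button 'Create Event'
    expect(page).to have_content('Event successfully created.')
  end
end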
Controller
In controller tests, only test controller logic: redirections, authentication, permissions, HTTP status. Stub the business logic. Treat filters, etc. like private methods in unit tests, tested only through public controller actions.
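For instance (a sketch; the EventsController scaffold from earlier on this page is assumed):

RSpec.describe EventsController, type: :controller do
  it 'redirects to the index after a successful create' do
    # stub the business logic so only the controller concern is under test
    allow_any_instance_of(Event).to receive(:save).and_return(true)
    post :create, params: { event: { name: 'RailsConf' } }
    expect(response).to redirect_to(events_path)
  end
end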
Others
Don't write route tests, except if you're writing an API, for the endpoints not already covered by integration tests.
Don't write view tests. You should be able to change copy or HTML classes without breaking your tests. Just assess critical view elements as part of your in-browser integration tests.
Do test your client JS, especially if it holds some application logic. All those rules also apply to JS tests.
Ignore any of these rules for business-critical stuff, or when something actually breaks (no one wants to explain to their boss/users why the same bug happened twice; that's why you should probably write at least a regression test when fixing a bug).
See more details on that post.
More coverage is better for code quality, but it costs more. There's a sliding scale here: if you're coding an artificial heart, you need more tests. The less you pay upfront, the more likely it is you'll pay later, maybe painfully.
In the full_name example, why have you placed a space between the parts and ordered them first_name then last_name? Does that matter? If you are later asked to sort by last name, is it OK to swap the order and add a comma? What if the last name is two words; will that additional space affect things? Maybe you also have an XML feed someone else is parsing? If you're not sure what to test for a simple undocumented function, think about the functionality implied by the method name.
I would think your company's culture is important to consider too. If you're doing more than others, then you're really wasting time. It doesn't help to have a well-tested footer if the main content is buggy. Causing the main build or other developers' builds to break would be worse, though. Finding the balance is hard; unless you're the decider, spend some time reading the test code written by other team members.
Some people take the approach of testing the edge cases and assume the main features will get worked out through usage. Considering getters/setters, I'd want a model class somewhere that has a few tests on those methods, maybe testing the database column type ranges. This at least tells me the network is OK, a database connection can be made, I have write access to a table that exists, etc. Pages come and go, so don't consider a page load a substitute for an actual unit test. (A testing-efficiency side note: with automated testing based on file update timestamps (autotest), that test wouldn't run, and you want to know ASAP.)
I'd prefer better-quality tests over full coverage. But I'd also want an automated tool pointing out what isn't tested. If it's not tested, I assume it's broken. As you find failures, add tests, even if the code is simple.
If you are automating your testing, it doesn't matter how long it takes to run. You benefit every time that test code is run- at that point, you know a minimum of your code's functionality is working, and you get a sense of how reliable the tested functionality has been over time.
100% coverage shouldn't be your goal- good testing should be. It would be misleading to think a single test of a regular expression was accomplishing anything. I'd rather have no tests than one, because my automated coverage report reminds me the RE is unreliable.
The primary benefit you would get from writing a unit test or two for this method would be regression testing. If, sometime in the future, something was changed that impacted this method negatively, you would be able to catch it.
Whether or not that's worth the effort is ultimately up to you.
The secondary benefit I can see by looking at it would be testing edge cases, like, what it should do if last_name is "" or nil. That can reveal unexpected behavior.
(i.e., if last_name is nil and first_name is "John", you get full_name => "John ", with a trailing space)
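A sketch of such an edge-case test, simply pinning down the current (perhaps surprising) behavior (Person is assumed as the model carrying full_name):

RSpec.describe Person do
  it 'leaves a trailing space when last_name is nil' do
    person = Person.new(first_name: 'John', last_name: nil)
    expect(person.full_name).to eq('John ')  # note the trailing space
  end
end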
Again, the cost-vs-benefit is ultimately up to you.
For generated code, no, there's no need to have test coverage there because, as you said, you didn't write it. If there's a problem, it's beyond the scope of the tests, which should be focused on your project. Likewise, you probably wouldn't need to explicitly test any libraries that you use.
For your particular method, it looks like the equivalent of a simple accessor (it's been a bit since I've done Ruby on Rails); testing that method would be testing the language features. If you were changing values or generating output, then you should have a test. But if you are just setting values or returning something with no computation or logic, I don't see the benefit of having tests cover those methods: if they are wrong, you should be able to detect the problem by visual inspection, or the problem is a language defect.
As far as the other methods, if you write them, you should probably have a test for them. In Test-Driven Development, this is essential as the tests for a particular method exist before the method does and you write methods to make the test pass. If you aren't writing your tests first, then you still get some benefit to have at least a simple test in place should this method ever change.
I'm developing an open-source web application on top of Rails. I'd like to make my code as easy to understand and modify as possible. I'm test-driving my development with unit tests, so much of the code is "documented" through test cases (what each controller action expects as input, what instance variables are set for output, how helpers should be called, what business logic the models incorporate, etc.). And on top of that, Rails' conventions should make a lot of documentation unnecessary where my code conforms to them.
So, where's the balance between having a well-documented Rails application and trying to obey Don't Repeat Yourself? Are there any good blogs or articles with guidance on what (RDoc) documentation is really helpful in a Rails app, and what's just waste?
OK, I'm not at all sure this is the right thing, but here's what I decided to do:
First, I thought that another Rails developer would be familiar with at least the intent of all of the code I wrote in the standard models, views, controllers directories. So, I started adding RDoc in other source files. It turns out that I'd built up a fair collection of code in lib/helpers and app/helpers, so started there. I wrote fairly typical function-level documentation for each helper method, focusing on intent and making sure I'd spelled out the things that the method and argument naming was mnemonic for. I didn't describe most of the corner cases, argument interactions, or error checking, leaving those details to a reading of each method's unit tests.
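For a flavor of that function-level documentation (a hypothetical helper; the corner cases live in its unit tests, as described above):

module DisplayHelper
  # Returns the user's name formatted for page headings.
  #
  # +user+:: any object responding to +first_name+ and +last_name+.
  def heading_name(user)
    "#{user.first_name} #{user.last_name}".strip
  end
end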
I found that while I was doing this, there were quite a few changes I made to method signatures, rather than having to document having done something stupid. (Have you read Clean Code by @unclebobmartin? I think it's great on all fronts, but especially on naming and self-documentation.) So, in addition to being an RDoc-adding exercise, I ended up spending a significant amount of time on (needed) refactorings, things that hadn't occurred to me in the refactoring pass after I'd first written the code because I didn't have enough distance yet. Perhaps 80% of the time I spent "adding RDoc" went into the code in my "helpers" directories, and the majority of that was refactoring rather than writing. So even if nobody ever reads the RDoc itself, I think this was a valuable exercise, and I'm very happy I spent the time.
Next, I turned to my Controllers. I left the default single-line comments matching what scaffolding generates on each controller method as the first line of the RDoc (e.g., "# GET /"). For methods that do just the standard Rails thing, I didn't add any other documentation. I found that the unique things I'd done in my controller methods that were worth documenting had to do with what they return (e.g., data formats other than HTML), what they're for (actions beyond the standard REST model, like those intended to service Ajax requests), and whether they use a non-standard URL format. (This was really documentation of my route configuration, but since config/routes.rb isn't used to generate RDoc....) I didn't describe the behavior of my actions at all, feeling that my automated tests sufficiently covered all of the cases/behaviors someone would need to know. Finally, I added class-level comments mentioning the model class that the controller manipulates, not because people can't guess, but so that there'd be a convenient link in the generated HTML page.
Last, I worked on my models. Again, I didn't document behaviors (business logic), considering my unit tests to be sufficient here. What I did do was remind readers that field definitions are in db/schema.rb (felt silly, but if a developer new to Rails was trying to figure things out, being reminded of the base names for all the magic methods couldn't hurt). I also realized that lots of my models' behaviors were implemented through Rails' declarative helper methods called directly by the model classes (validates_..., belongs_to, etc.). Rather than trying to describe what this stuff accomplishes (after all, the desired model behavior is "described" by the tests), I just put in a reminder to look at the model source. (It's a shame RDoc isn't aware enough of Rails conventions to extract and document these things like it does Ruby constant definitions.)
And that was that. Perhaps a little more RDoc than I needed to write, but I think that it is light enough that it will get maintained as the code evolves, and it doesn't overlap at all with things "expressed" by my unit tests. Hopefully, it has filled the gap between what a Rails developer can infer from convention and what you're only going to figure out from the source. (Although I'm now noticing a growing impulse to pull more pieces from my views into helpers, even though they're not reused and it would mean losing ERB's inline HTML, just so that I can write descriptions for them. Go figure.)
My short answer: RDoc your models, since they are truly unique to your application.
But it sounds like you are building a web application, so the argument could be made that you should RDoc other pieces, too.
Document anything that is not crystal clear from reading the code. Self-documenting code is easily achieved with Ruby, but crafty logic needs comments! Try to put yourself in the shoes of a project newbie when deciding if something needs rdoc. Odds are that if you're thinking about whether or not it needs it, it does. Lastly, I wouldn't rely on test sources to provide documentation for your application code. I know that if I jumped on a project and was left scratching my head as to why a model is behaving a certain way I wouldn't run immediately to the unit test for answers.
Is it necessary to unit test ActiveRecord validations, or are they well-tested already and hence reliable enough?
Validations per se should be trustworthy, but you may want to check that the validation is present.
Put another way, a good way to test something is as if it were a black box, abstracting the tests from the implementation. So, for instance, you may have a test that checks that a person model can't be saved without a name, but doesn't care about how the Person class performs that validation.
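For instance, a black-box presence check might look like this (a sketch; a Person model with a required name is assumed):

it "can't be saved without a name" do
  person = Person.new(:name => nil)
  person.should_not be_valid
end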
It should be sufficient to accept that libraries such as ActiveRecord are better-tested by the developers than they ever will be by you: for them it's a primary concern, for you it's at best tangential.
That's not to say there won't be bugs - I found a small one in the MS SQL Server adapter once, a long time ago - but the kind of test you're likely to be implementing is highly unlikely to expose them, as they're most likely to be edge cases. If you do find a bug, of course, it's probably very helpful if you report it with a test case that exposes it!
I would only test ActiveRecord internals if I were seeking to better understand a particular aspect that the library implements, and I would not include those exploratory tests in any application project, since they're not really relevant to the project.
In general, you should write tests for code that you write yourself; if you live, or try to live, in a TDD world, the tests should be written first. If your models have validation rules, then you should almost certainly write tests to ensure the rules are present. In most cases, the tests will be trivial, but they'll really be useful if a line inadvertently gets deleted some time in the future...
As Mike wrote, at the very least you should test that the validation exists. It's just a bit of double-entry accounting (sanity check) that is easy enough to do.
Depending on the situation, you should also test that your model is valid or invalid under particular circumstances. For example if your field requires a certain format, then test the example formats that are valid and those that aren't. It's much easier to see what this means by reading a few examples in your tests:
class Person < ActiveRecord::Base
  validates_format_of :email,
    :with => /\A([^@\s]+)@((?:[-a-z0-9]+\.)+[a-z]{2,})\Z/i
end
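The corresponding example-based specs might read (a sketch, assuming email is the only validated attribute):

describe Person do
  it 'accepts a well-formed address' do
    Person.new(:email => 'user@example.com').should be_valid
  end

  it 'rejects a malformed address' do
    Person.new(:email => 'not-an-email').should_not be_valid
  end
end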
Yes, the validations are well-tested and reliable enough. But your correct use of the validations is what you want to verify.
As a side note, Ryan Bigg's blog post has_and_belongs_to_many double insert mentions someone encountering a bug in ActiveRecord (not validation related, though). As he points out, don't assume Rails can't possibly have a bug, because we know there are 900 open tickets for Rails.
But yes, the main reason you'd write a test is to check that your use of ActiveRecord is correct.