Do you need something like Fitnesse, if you have BDD tests?
BDD "tests" exist at multiple different levels of granularity, all the way up to the initial project vision. Most people know about the scenarios. A few people remember that BDD started off with the word "should" as a replacement for JUnit's "test" - as a replacement for TDD. The reason I put "tests" in quotes is because BDD isn't really about testing; it's focused on finding the places where there's a lack or mismatch of understanding.
Because of that focus, the conversations are much more important than the BDD tools.
I'm going to say that again. The conversations are much more important than the BDD tools.
Acceptance testing doesn't actually mandate the conversations, and usually works from the assumption that the tests you're writing are the right tests. In BDD we assume that we don't know what we're doing (and probably don't know that we don't know). This is why we use things like "Given, When, Then" - so that we can have conversations around the scenarios and / or unit-level examples. (Those are the two levels most people are familiar with - the equivalent of acceptance tests and unit tests - but it goes up the scale).
We don't call them "acceptance tests" because you can't ask a business person "Please help me with my acceptance test". They'll look at you with a really weird, squinty gaze and then dismiss you as that geek girl. 93% of you don't want that.
Try "I'd like to talk to you about the scenario where..." instead. Or, "Can you give me an example?" Either of these are good. Calling them "Acceptance tests" starts making people think that you're actually doing testing, which would imply that you know what you're doing and just want to make sure you've done it. At that point, conversations tend to focus on how quickly you can get the wrong thing out, instead of about the fact you're getting out the wrong thing.
And you're getting the wrong thing out. Really, honestly, you are. Even if you think you're not, it's only because you don't understand second-order ignorance. You don't know that you don't know, and that's OK, as long as you've found the places where you could know you don't know. (You won't find all of them. Don't let the categorisation paradox keep you up at night.)
The only way to really get it right is to get all the requirements up front, and you know what happens when you try that. That's right. It's Waterfall. Remember the overtime? The weekend work? The seven years in which not one thing you created made it to production? If you want to avoid that, you only have one chance: assume you're wrong, have some conversations about it to be less wrong, then accept that you're still wrong and go for it anyway. Writing tests too early means you have even more chance to be wrong, and now it's harder to change and everyone thinks you're right and the PM is measuring your velocity and now you're committed to being wrong for another 2 weeks. And - worse - you're about to test that you're wrong, too.
Once again. The conversations are much more important than the BDD tools.
Please, please, don't fixate on the tools. The tools are just a mechanism for capturing the conversations and making sure that they get played into the code. Scenarios are not a replacement for conversations, any more than a 3 x 5 index card is a replacement for requirements.
Having said that, if you must start with a tool, put Slim behind Fitnesse so that it can run lovely Given / When / Thens without having to mess with Fit's tables and fixtures. GivWenZen is based on Slim and either of them rocks. FitSharp is the equivalent for those of you in the .NET space. Or just use Cucumber, or SpecFlow, or knock up a little custom DSL* that will do the job fine for years.
Transparency: *I wrote that one. And bits of JBehave. I wish we had called it "Dont-concentrate-on-BDD-tools-Behave". I might be heavily involved in other bits of BDD. Plus Dan North will buy me a pint if I can get this message out, so it's not exactly impartial advice.
Regardless - have the conversations already. It's just people. Go talk.
I don't know if there's such a thing, strictly speaking, as a "BDD test". BDD is a philosophy that suggests how you can best interact and collaborate with stakeholders to complete a complex project. It doesn't directly make any prescriptions for the best way to write tests. In other words, you'll probably still have all the usual kinds of tests (including acceptance tests) under a BDD-philosophy project.
When you hear of "BDD frameworks", the speaker usually means a framework for writing all your usual kinds of tests but with a BDD twist. For example, in RSpec, you still write unit tests; you just add the BDD flavor to them.
While BDD is larger in scope than just tests, there are indeed BDD tests. These are unit tests that follow the BDD language:
Given some initial context (the givens),
When an event occurs,
then ensure some outcomes.
There are a few good BDD frameworks available, depending on your language of preference; a short RSpec-style sketch follows the list.
JBehave for Java
RSpec for Ruby
NBehave for .NET
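To make that shape concrete, here is a minimal, self-contained RSpec sketch; the Account class is invented purely so the example runs on its own, and the point is the given / when / then structure rather than any particular framework feature:

require "rspec"

# A throwaway Account class, defined here only so the spec runs standalone.
class Account
  attr_reader :balance

  def initialize(balance:)
    @balance = balance
  end

  def withdraw(amount)
    @balance -= amount
  end
end

RSpec.describe Account do
  it "reduces the balance by the withdrawn amount" do
    # Given some initial context
    account = Account.new(balance: 100)

    # When an event occurs
    account.withdraw(25)

    # Then ensure some outcomes
    expect(account.balance).to eq(75)
  end
end

The same given/when/then structure scales up to acceptance-level scenarios in tools such as JBehave or Cucumber.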
I like to draw a distinction between "specs" and "tests."
If I am covering a method called GetAverage(IEnumerable<int> numbers), I am going to write a more or less standard unit test.
If I am covering a method called CalculateSalesTax(decimal amount, State state, Municipality municipality), I am still going to write the unit test, but I'll call it a specification because I'm going to change it up (1) to verify the behaviour of the routine, and (2) because the test itself will document both the routine and its acceptance criteria.
Consider:
[TestFixture]
public class When_told_to_calculate_sales_tax_for_a_given_state_and_municipality // the name defines the context
{
    IStateSalesTaxWebService stateService;
    IMunicipalSurchargesWebService municipalService;
    decimal result;

    [SetUp]
    public void establish_context()
    {
        // set up mocks and expected behaviour
        // (the_dependency<T>() is a small custom helper that creates a stub for us)
        stateService = the_dependency<IStateSalesTaxWebService>();
        municipalService = the_dependency<IMunicipalSurchargesWebService>();

        stateService.Stub(x => x.GetTaxRate(State.Florida))
            .Return(0.06m);   // 6% state rate
        municipalService.Stub(x => x.GetSurchargeRate(Municipality.OrangeCounty))
            .Return(0.005m);  // 0.5% municipal surcharge

        // run what's being tested
        // (assumes the calculator takes its two services via constructor injection)
        result = new SalesTaxCalculator(stateService, municipalService)
            .CalculateSalesTax(10m, State.Florida, Municipality.OrangeCounty);
    }

    // verify the expected behaviour (these are the specifications)

    [Test]
    public void should_check_the_state_sales_tax_rate()
    {
        stateService.was_told_to(x => x.GetTaxRate(State.Florida)); // extension methods wrap the assertions
    }

    [Test]
    public void should_check_the_municipal_surcharge_rate()
    {
        municipalService.was_told_to(x => x.GetSurchargeRate(Municipality.OrangeCounty));
    }

    [Test]
    public void should_return_the_correct_sales_tax_amount()
    {
        result.should_be_equal_to(10.65m); // 10m * (1 + 0.06 + 0.005)
    }
}
JBehave (and NBehave, which recently added the same support) work with plain text files, so while many other frameworks add a "BDD taste" to unit tests, the text-based behaviour specifications/examples created with JBehave are also suitable as acceptance tests. And no, you don't need FitNesse for that.
To get an idea of how it works, I suggest JBehave's two-minute tutorial.
For BDD testing in Flex you can try GivWenZen-flex; check it out at http://bitbucket.org/loomis/givwenzen-flex.
Cheers,
Kris
xBehavior BDD tests implemented well are robo-driven user acceptance criteria.
xSpecification BDD tests are normally unit tests and are unlikely to be acceptable user acceptance criteria.
I am all confused with TDD vs BDD :)
How do TDD and BDD differ on each of the points below?
Development: test case first, development follows next.
REST service (HTTP): don't make real REST calls? If so,
a) do we return only hardcoded JSON using a mock object?
b) how do we handle REST call failures? Should we have a test case for that too?
Especially for item 2, I have Googled many articles, but couldn't find a sample (code) approach for how to handle REST calls.
BDD and TDD are not comparable to each other, although they are both used in test-first development.
BDD is more than just writing tests with an English-like syntax, e.g. Kiwi. BDD (also known as ATDD, Acceptance Test Driven Development) starts with developers, QA, and designers (e.g. business and interaction designers) working together to develop a shared understanding of the proposed solution. It is common to use examples to illustrate the behavior, also known as Specification by Example.
I have found that a useful way to think of abstraction is distinguishing between what you do (abstract, high-level policy), and how you do it (concrete, low-level details). Every concrete detail exists to fulfill a higher-level policy. When you see something concrete, it is beneficial to identify the policy it is serving.
The specification by example can be used to create high-level acceptance tests, which test what the application does, i.e. its behavior.
Unit tests are used to test how the app implements a solution, i.e. test that the appropriate messages are sent to its collaborators/dependencies at the appropriate time.
The phases of the standard TDD cycle are Red, Green, Refactor. During the green phase, your goal is to get the test passing as quickly as possible, by hook or by crook—it is acceptable to write ugly, unorganized code. Once the test passes, you refactor the code to make it more readable/changeable.
Similarly, with a BDD/ATDD cycle, you have Red, Green, Refactor. During the green phase of BDD, just get the acceptance test to pass. All of the code you write can exist within the test itself. During the refactor phase of BDD, you extract test code into production code. You can use TDD to guide the extraction.
So, for a given BDD acceptance test, you might have multiple TDD tests.
Regarding how to test REST calls, let's go back to the premise of abstraction—distinguishing what we do from how we do it.
Calling a REST service is a concrete action. The policy it satisfies may be to provide a list of model objects.
Let's say the use case you are implementing is to invite a friend to lunch. Part of the use case responsibility is to obtain the list of friends from a server; it doesn't care how the server finds the friends.
Your BDD tests would handle getting the list of friends, picking a friend, and completing the invitation. Your BDD tests would not worry about actually making REST calls.
When you use TDD to implement the class that handles communication with the server, you could have tests that retrieve JSON from a remote data source (i.e. the server) and ensure the JSON is properly parsed into User model objects. You could also have tests to cover the data source responding with an error, etc.
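As a rough sketch of those TDD-level tests (shown here in RSpec for brevity; RemoteFriendsDataSource, User and FriendsUnavailableError are invented names, and the HTTP layer is stubbed, so no real REST call is ever made):

require "rspec"
require "json"

# A thin, invented data source that wraps an injected HTTP client and parses
# its JSON into User models. Defined inline so the sketch runs on its own.
User = Struct.new(:id, :name)
FriendsUnavailableError = Class.new(StandardError)

class RemoteFriendsDataSource
  def initialize(http_client)
    @http_client = http_client
  end

  def friends
    JSON.parse(@http_client.get("/friends")).map { |h| User.new(h["id"], h["name"]) }
  end
end

RSpec.describe RemoteFriendsDataSource do
  let(:http_client) { double("http_client") }           # the HTTP layer is stubbed out
  let(:data_source) { RemoteFriendsDataSource.new(http_client) }

  it "parses the JSON payload into User models" do
    allow(http_client).to receive(:get).with("/friends")
      .and_return('[{"id":1,"name":"Alice"},{"id":2,"name":"Bob"}]')

    expect(data_source.friends.map(&:name)).to eq(["Alice", "Bob"])
  end

  it "surfaces an error when the server responds with a failure" do
    allow(http_client).to receive(:get).and_raise(FriendsUnavailableError)

    expect { data_source.friends }.to raise_error(FriendsUnavailableError)
  end
end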
At the point you actually make a REST call, in the implementation of a remote data source that uses REST to communicate with the backend server, I would classify that as an integration test, as you are testing the integration with a component you don't control, i.e. the actual backend server. The integration tests only need to confirm that the server returns JSON data in the format your app expects, or that errors are returned when appropriate.
BDD is actually derived from TDD, so it's not surprising there's a little confusion! BDD is exactly like TDD (or ATDD if you're doing it for a whole system), but without the word "Test". It turns out that can be pretty powerful.
Particularly, it lets developers have conversations with non-technical business people about what the system should do. You can also use it to have conversations about what a class should do, or a module of code should do, even with a technical expert.
So in the example of your REST service, you can imagine that I'm a dev and you're an expert who knows what the REST service should do.
Me: What should it do?
You: It should let me read a record.
Me: Great! Can you give me an example of a record?
You: I have one here...
Me: Is there any context in which someone shouldn't be able to read the record?
You: Sure, if they don't have permissions.
...
Me: Okay, so I've done Read, let's do Update. Can you give me an example of a typical update?
You: Here you go.
Me: Fantastic, and you want it to respond just with success or fail. Is there any scenario in which it should fail?
You: Sure. The record shows when it was last updated. If someone else has already updated it in the meantime, yours should fail when you submit it.
So you see you can use BDD to explore all kinds of scenarios, including those around a REST service. The trick is to ask, "Can you give me an example?" Then you get a concrete example, which you can then automate if you want to. The conversations help us look for other examples and scenarios which we might have missed.
Don't use BDD tools to automate for a technical audience! BDD tools like Cucumber, JBehave etc. work with real English that's a lot harder to refactor than code. Use JUnit, NUnit etc. if you're just doing something like a REST service. You can put "Given, When, Then" in comments, or make a little DSL.
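For example, here is what the "read a record" conversation above might turn into as a plain unit test, with Given/When/Then living in comments. RecordStore and PermissionError are invented names, included only so the sketch runs on its own:

require "rspec"

# Invented stand-ins, purely for illustration.
PermissionError = Class.new(StandardError)

class RecordStore
  def initialize(records:, can_read:)
    @records, @can_read = records, can_read
  end

  def read(id)
    raise PermissionError unless @can_read
    @records.fetch(id)
  end
end

RSpec.describe RecordStore do
  it "returns the record to a user with permission" do
    # Given a stored record and a user who may read it
    store = RecordStore.new(records: { 42 => "hello" }, can_read: true)

    # When the user reads it
    result = store.read(42)

    # Then they get the record back
    expect(result).to eq("hello")
  end

  it "refuses to read the record when the user lacks permission" do
    # Given a user without read permission
    store = RecordStore.new(records: { 42 => "hello" }, can_read: false)

    # When they try to read it, then it fails
    expect { store.read(42) }.to raise_error(PermissionError)
  end
end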
So now you can see that with your REST call failure, if I were coding it, I'd have an example like:
Me: So, this call failure... can you give me an example?
You: Sure, if you access a record that's been deleted it's going to fail.
Me: Give me a typical example of a record that might get deleted?
You: The one we were using before is good.
Me: Okay, is there a situation in which we shouldn't delete a record?
You: Yes, if it's already been published...
Etc.
You can see that throughout, I'm not really using the word "test". Tests are a nice by-product in BDD. It's used more for exploration and specification of requirements. The conversations in BDD are the most important part of it.
The reason it's tricky to find examples of using BDD for REST is first because REST is deliberately simple and doesn't often have a lot of behaviour, and second because BDD's scenarios aren't generally phrased in terms of their implementation, focusing instead on the value of what the service or system provides ("read a record").
TDD and ATDD are exactly the same, if they're done well. It's just easier to have conversations about examples and scenarios than it is to have them about tests.
Suppose I have a user class that has a method is_old_enough?
The method just checks that the age is 18 or over. Seems pretty simple.
Does TDD mean I have to write a test for this, even though it is trivial?
class User
  def is_old_enough?
    self.age >= 18
  end
end
And if so, why? What is the benefit of writing a test for this? You'd just be testing that x >= y works the way you expect the >= operator to work.
Because the most likely scenario I see happening is the following:
It turns out the age should actually be 21. That's a bug the test didn't catch, because we had the wrong assumption when we wrote the code. So we go in and change the method to >= 21. But now the test fails! So we have to go and change the test too. The test didn't help, and actually raised a false alarm while we were making the change.
Looks like tests for simple methods like this are not actually testing anything useful, and are actually hurting you.
I think you're confusing test coverage and Test-Driven Development. TDD is just a practice of developing an automated test that is going to verify the use cases of some new feature. Usually it starts off failing because you've stubbed the functionality out or simply haven't implemented it. Next, you develop the feature until the test passes.
The idea is that you are in the mindset of developing tests that verify your important use cases/features. This doesn't necessarily mean you need to test simple functions if you think they are covered by your regular feature tests.
In terms of coverage, that's really up to you as the developer (or team) to decide. Obviously, having roughly 1-to-1 coverage of tests to API is desirable, but it's your choice whether you think is_old_enough? will always stay this easy to implement. It may seem like an easy implementation now, but perhaps that will change in the future. You just need to be mindful of those kinds of decisions when you choose whether to write a test or not. More than likely, though, your use case won't change and the test is easy to write. It doesn't hurt to feel confident in all areas of your code.
I think the question has more to do with unit testing than TDD in particular.
Short answer: focus on behaviors
Long answer: Well, there is a nice phrase out there: BDD is TDD done right, and I completely agree. While BDD and TDD are in large part the "same" thing (not equal, mind you!), BDD for me gives the context for doing TDD. You can read a lot on the Internet around this, so I will not write essays here, but let me say this:
In your example, yes, the test is necessary, because the rule that a user is old enough is a behavior of the User entity. The test serves as a safety net for the many other things yet to come that will rely on this piece of information, and for me the test documents this behavior very well (I actually tend to read tests to find out what the developer had in mind when writing a class: you learn what to expect, how the class behaves, what the edge cases are, etc.).
I don't really see how the test would not catch the refactoring, since I would write the test with the numbers 18, 19, 25 and 55 in mind (just a bunch of asserts, typed very fast and very easily).
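A minimal sketch of that boundary-value spec (plain RSpec; a constructor is added to the question's User so the example runs standalone):

require "rspec"

class User
  attr_accessor :age

  def initialize(age)
    @age = age
  end

  def is_old_enough?
    self.age >= 18
  end
end

RSpec.describe User do
  # A handful of boundary values documents the rule better than the code does.
  it "is not old enough at 17" do
    expect(User.new(17).is_old_enough?).to be false
  end

  it "is old enough at exactly 18" do
    expect(User.new(18).is_old_enough?).to be true
  end

  it "is old enough at 55" do
    expect(User.new(55).is_old_enough?).to be true
  end
end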
A very important piece of the puzzle is that unit tests are just one technique you need. If your design is lacking, you will find yourself writing too many meaningless tests, or you will have a terrible time testing classes that do multiple things, etc. You need very good SOLID skills to be able to shape classes in a way that testing only their public interfaces (this includes protected methods as well) actually tests the entire class. As said before, focusing on behaviors is the key here.
I have a fairly large MVC3 application, of which I have developed a small first phase without writing any unit tests, particularly ones targeted at detecting things like regressions caused by refactoring. I know it's a bit irresponsible to say this, but with very simple CRUD operations it hasn't really been necessary so far; still, I would like to move toward a TDD approach going forward.
I have basically completed phase 1, where I have written actions and views where members can register as authors and create course modules. Now I have more complex phases to implement where consumers of the courses and their trainees must register and complete courses, with academic progress tracking, author feedback and financial implications. I feel it would be unwise to proceed without a solid unit testing strategy, and based on past experience I feel TDD would be quite suitable to my future development efforts here.
Are there any known procedures for 'converting' a development effort to TDD, and for introducing unit tests to already written code? I don't need kindergarten level step by step stuff, but general strategic guidance.
BTW, I have included the web-development and MVC tags on this question as I believe these fields of development can significantly influence the unit testing requirements of project artefacts. If you disagree and wish to remove any of them, please be so kind as to leave a comment saying why.
I don't know of any existing procedures, but I can highlight what I usually do.
My approach for an existing system would be to attempt writing tests first to reproduce defects and then modify the code to fix it. I say attempt, because not everything is reproducible in a cost effective manner. For example, trying to write a test to reproduce an issue related to CSS3 transitions on a very specific version of IE may be cool, but not a good use of your time. I usually give myself a deadline to write such tests. The only exception may be features that are highly valued or difficult to manually test (like an API).
For any new features you add, first write the test (as if the class under test is an API), verify the test fails, and implement the feature to satisfy the test. Repeat. When you're done with the feature, run it through something like Pex. It will often highlight things you never thought of. Be sensible about which issues to fix.
For existing code, I'll use code coverage to help me find features I do not have tests for. I comment out the code, write the test (which fails), uncomment the code, verify the test passes, and repeat. If needed, I'll refactor the code to simplify testing. Pex can also help.
Pay close attention to pain points, as it highlights areas that should be refactored. For example, if you have a controller that uses ObjectContext/IDbCommand/IDbConnection directly for data access, you may find that you require a database to be configured etc just to test business conditions. That is my hint that I need an interface to a data access layer so I can mock it and simulate those business conditions in my controller. The same goes for registry access and so forth.
But be sensible about what you write tests for. The value of TDD diminishes at some point, and it may actually cost more to write those tests than to have someone test the feature manually.
We are running a pretty big website, we have some critical legacy code there we want to have well covered.
At the same time we would like a report of the features we currently support and have covered. We also want to be sure we really cover every possible corner case. Some code paths are critical and will need many more tests even after achieving 100% coverage.
As we are already using RSpec, and RSpec has "feature" and "scenario" keywords, we tried to make the list with RSpec rather than going for Cucumber, but I think this question applies to any testing tool.
We want something like this:
feature "each advertisement will be shown a specified % of impressions"
scenario "As ..."
This feature is minimal from the point of view of managers but huge in the code. It involves a backend tool, a periodic task, logic in the models and views in backend and front end.
We tried to divide it like this:
feature "each creative will be shown a specified % of impressions"
context "configuration"
context "display"
scenario "..."
context "models"
it "should ..."
context "frontend"
context "display"
scenario "..."
context "models"
it "should ..."
Configuration takes place in another tool, "display" would contain integration tests, and "models" would contain unit tests.
I repeat myself, but the idea is to ensure that the feature is really finished (including building the configuration tool) and 100% tested.
But looking at this file, it is neither an integration test nor a unit test, and it doesn't even belong to any particular project.
There should definitely be a better way of managing this.
Any experiences, resources, or ideas you can share to guide us?
The scenario you're describing is a huge reason why BDD is so popular. It forces you to write code in a way that's easy to test. Having said that, you're obviously not going to go back and rewrite the entire legacy application. There are a few things you should consider though:
As you go through each section of the application, you should ask yourself 'Will it be harder to refactor than to write tests for this?'. Sometimes refactoring before writing tests just cannot be avoided.
Testing isn't about 100% coverage, it's about 100% confidence. As you mentioned, you plan on writing more tests even when you have 100% coverage. This is because you're going for confidence. Once you're confident in a piece of code, move on. You can always come back to it at a later time.
From my experience, Cucumber is easier for tests that cover a large portion of the application. I think the reason for this is that writing out the tests in plain English makes you think of things you wouldn't have otherwise. It also allows you to focus on the behavior instead of the code and can make refactoring a less daunting task.
You don't really get much out of adding tests to existing code if you never touch that code again. Start by testing the code you want to make changes to (i.e. refactor) first.
I also recommend the book Rails Test Prescriptions, specifically one of the last chapters called "Testing a Legacy Application".
I've been writing tests for a while now and I'm starting to get the hang of things. But I've got some questions concerning how much test coverage is really necessary. The consensus seems pretty clear: more coverage is always better. But, from a beginner's perspective at least, I wonder if this is really true.
Take this totally vanilla controller action for example:
def create
  @event = Event.new(params[:event])
  if @event.save
    flash[:notice] = "Event successfully created."
    redirect_to events_path
  else
    render :action => 'new'
  end
end
Just the generated scaffolding. We're not doing anything unusual here. Why is it important to write controller tests for this action? After all, we didn't even write the code - the generator did the work for us. Unless there's a bug in Rails, this code should be fine. It seems like testing this action is not all too different from testing, say, collection_select - and we wouldn't do that. Furthermore, assuming we're using Cucumber, we should already have the basics covered (e.g. where it redirects).
The same could even be said for simple model methods. For example:
def full_name
  "#{first_name} #{last_name}"
end
Do we really need to write tests for such simple methods? If there's a syntax error, you'll catch it on page refresh. Likewise, Cucumber would catch this so long as your features hit any page that calls the full_name method. Obviously, we shouldn't be relying on Cucumber for anything too complex. But does full_name really need a unit test?
You might say that because the code is simple the test will also be simple. So you might as well write a test since it's only going to take a minute. But it seems that writing essentially worthless tests can do more harm than good. For example, they clutter up your specs making it more difficult to focus on the complex tests that actually matter. Also, they take time to run (although probably not much).
But, like I said, I'm hardly an expert tester. I'm not necessarily advocating less test coverage. Rather, I'm looking for some expert advice. Is there actually a good reason to be writing such simple tests?
My experience in this is that you shouldn't waste your time writing tests for code that is trivial, unless you have a lot of complex stuff riding on the correctness of that triviality. I, for one, think that testing stuff like getters and setters is a total waste of time, but I'm sure that there'll be more than one coverage junkie out there who'll be willing to oppose me on this.
For me tests facilitate three things:
They guarantee unbroken old functionality. If I can check, by running the tests, that nothing new I put in has broken my old things, that's a good thing.
They make me feel secure when I rewrite old stuff. The code I refactor is very rarely the trivial code. If, however, I want to refactor non-trivial code, having tests to ensure that my refactorings have not broken any behaviour is a must.
They are the documentation of my work. Non-trivial code needs to be documented. If, however, you agree with me that comments in code are the work of the devil, having clear and concise unit tests that let you understand what the correct behaviour of something is, is (again) a must.
Anything I'm sure I won't break, or that I feel is unnecessary to document, I simply don't waste time testing. Your generated controllers and model methods, then, I would say are all fine even without unit tests.
The only absolute rule is that testing should be cost-efficient.
Any set of practical guidelines to achieve that will be controversial, but here is some advice for avoiding tests that are generally wasteful or do more harm than good.
Unit
Don't test private methods directly, only assess their effects indirectly through the public methods that call them.
Don't test internal states
Only test non-trivial methods, where different contexts may get different results (calculations, concatenation, regexes, branches...)
Don't assess things you don't care about, e.g. full copy on some message or useless parts of complex data structures returned by an API...
Stub all the things in unit tests; they're called unit tests because you're only testing one class, not its collaborators. With stubs/spies, you test the messages you send them without testing their internal logic (see the sketch after this list).
Consider private nested classes as private methods
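As a rough illustration of the stubbing point above (Invoice and the tax calculator are invented names; the collaborator is stubbed, and the message sent to it is what gets verified):

require "rspec"

class Invoice
  def initialize(subtotal:, tax_calculator:)
    @subtotal, @tax_calculator = subtotal, tax_calculator
  end

  def total
    @subtotal + @tax_calculator.tax_for(@subtotal)
  end
end

RSpec.describe Invoice do
  it "asks the tax calculator for tax on the subtotal" do
    tax_calculator = double("tax_calculator")
    expect(tax_calculator).to receive(:tax_for).with(100).and_return(7)

    invoice = Invoice.new(subtotal: 100, tax_calculator: tax_calculator)

    expect(invoice.total).to eq(107)
  end
end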
Integration
Don't try to test all the combinations in integration tests. That's what unit tests are for. Just test happy-paths or most common cases.
Don't use Cucumber unless you're really doing BDD
Integration tests don't always need to run in the browser. To test more cases with less of a performance hit you can have some integration tests interact directly with model classes.
Don't test what you don't own. Integration tests should expect third-party dependencies to do their job, but not substitute to their own test suite.
Controller
In controller tests, only test controller logic: Redirections, authentication, permissions, HTTP status. Stub the business logic. Consider filters, etc. like private methods in unit tests, tested through public controller actions only.
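A hedged rspec-rails sketch of that idea, using the create action from the question; it assumes a standard Rails app with the question's EventsController and Event model, and the exact params and matcher syntax varies across rspec-rails versions:

require "rails_helper"

RSpec.describe EventsController, type: :controller do
  describe "POST #create" do
    it "redirects to the events list when the model saves" do
      # The business logic is stubbed out; the spec only cares about the redirect.
      allow(Event).to receive(:new).and_return(double(save: true))

      post :create, params: { event: { name: "irrelevant" } }

      expect(response).to redirect_to(events_path)
    end

    it "re-renders the form when the model does not save" do
      allow(Event).to receive(:new).and_return(double(save: false).as_null_object)

      post :create, params: { event: { name: "irrelevant" } }

      expect(response).to render_template(:new)
    end
  end
end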
Others
Don't write route tests, except if you're writing an API, for the endpoints not already covered by integration tests.
Don't write view tests. You should be able to change copy or HTML classes without breaking your tests. Just assess critical view elements as part of your in-browser integration tests.
Do test your client JS, especially if it holds some application logic. All those rules also apply to JS tests.
Ignore any of those rules for business-critical stuff, or when something actually breaks (no-one wants to explain to their boss/users why the same bug happened twice; that's why you should probably write at least regression tests when fixing a bug).
See more details on that post.
More coverage is better for code quality, but it costs more. There's a sliding scale here: if you're coding an artificial heart, you need more tests. The less you pay upfront, the more likely it is you'll pay later, maybe painfully.
In the full_name example, why have you placed a space between the names and ordered them first_name then last_name? Does that matter? If you are later asked to sort by last name, is it OK to swap the order and add a comma? What if the last name is two words; will that additional space affect things? Maybe you also have an XML feed someone else is parsing. If you're not sure what to test for a simple, undocumented function, think about the functionality implied by the method name.
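If any of those questions matter, pinning each one down as an example is cheap. A standalone sketch, with an invented Person class standing in for the model so it runs on its own:

require "rspec"

class Person
  attr_reader :first_name, :last_name

  def initialize(first_name:, last_name:)
    @first_name, @last_name = first_name, last_name
  end

  def full_name
    "#{first_name} #{last_name}"
  end
end

RSpec.describe "#full_name" do
  it "joins first and last name with a single space, first name first" do
    person = Person.new(first_name: "Ada", last_name: "Lovelace")
    expect(person.full_name).to eq("Ada Lovelace")
  end

  it "keeps a multi-word last name intact" do
    person = Person.new(first_name: "Ada", last_name: "de la Cruz")
    expect(person.full_name).to eq("Ada de la Cruz")
  end
end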
I would think your company's culture is important to consider too. If you're doing more than others, then you're really wasting time. It doesn't help to have a well-tested footer if the main content is buggy. Causing the main build or other developers' builds to break would be worse, though. Finding the balance is hard; unless you are the decider, spend some time reading the test code written by other team members.
Some people take the approach of testing the edge cases and assume the main features will get worked out through usage. As for getters/setters, I'd want a model class somewhere that has a few tests on those methods, maybe testing the database column type ranges. That at least tells me the network is OK, a database connection can be made, I have access to write to a table that exists, and so on. Pages come and go, so don't consider a page load to be a substitute for an actual unit test. (A testing-efficiency side note: if your automated testing runs based on file update timestamps (autotest), that "test" wouldn't run, and you want to know as soon as possible.)
I'd prefer to have better-quality tests rather than full coverage. But I'd also want an automated tool pointing out what isn't tested. If it's not tested, I assume it's broken. As you find failures, add tests, even if it's simple code.
If you are automating your testing, it doesn't matter how long it takes to run. You benefit every time that test code is run: at that point, you know a minimum of your code's functionality is working, and you get a sense of how reliable the tested functionality has been over time.
100% coverage shouldn't be your goal; good testing should be. It would be misleading to think a single test of a regular expression was accomplishing anything. I'd rather have no tests than one, because my automated coverage report reminds me the RE is unreliable.
The primary benefit you would get from writing a unit test or two for this method would be regression testing. If, sometime in the future, something was changed that impacted this method negatively, you would be able to catch it.
Whether or not that's worth the effort is ultimately up to you.
The secondary benefit I can see by looking at it would be testing edge cases, like, what it should do if last_name is "" or nil. That can reveal unexpected behavior.
(i.e. if last_name is nil, and first_name is "John", you get full_name => "John ")
Again, the cost-vs-benefit is ultimately up to you.
For generated code, no, there's no need to have test coverage there because, as you said, you didn't write it. If there's a problem, it's beyond the scope of the tests, which should be focused on your project. Likewise, you probably wouldn't need to explicitly test any libraries that you use.
For your particular method, it looks like that's the equivalent of a setter (it's been a bit since I've done Ruby on Rails); testing that method would be testing the language features. If you were changing values or generating output, then you should have a test. But if you are just setting values or returning something with no computation or logic, I don't see the benefit of having tests cover those methods: if they are wrong, you should be able to detect the problem by visual inspection, or else the problem is a language defect.
As far as the other methods go, if you write them, you should probably have a test for them. In Test-Driven Development this is essential, as the tests for a particular method exist before the method does, and you write methods to make the tests pass. If you aren't writing your tests first, then you still get some benefit from having at least a simple test in place should this method ever change.