How do ScalaTest and Spock differ? What is the added value of each? Which is more agile for Behavior Driven Development (BDD)? Could you please share some thoughts on the matter?
I want to start BDD and pick one of the two, so I'd like to make an educated decision and gather as much information as possible first, especially given that I'm a Java programmer and Scala seems to have a steep learning curve.
Any advice, ideas, or feedback from experience would be welcome.
Many thanks
In a nutshell, I would recommend using ScalaTest for testing Scala code, and Spock for testing Java or Groovy code. (Of course, it's also perfectly possible to test Java code with ScalaTest.) Why not give both tools a shot and stick with the one that you are more comfortable with?
Disclaimer: I'm the creator of Spock.
While I agree with Peter's answer, I'd like to give my perspective on this.
Both ScalaTest and Spock provide fluent BDD-style tests. However, I find that the best feature of Spock is that you can create a single scenario with multiple sets of data and expected results. This is very much like Cucumber's Scenario Outline, only run at the unit test level.
I can't find another unit test framework/library that does that.
In summary, if you need to test multiple inputs/outputs for a single test scenario, use Spock; otherwise, feel free to choose whichever you feel comfortable with.
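For a Java reader wondering what "one scenario, many data rows" looks like in practice, here is a rough sketch using JUnit 5 parameterized tests; it is the closest plain-Java approximation, not Spock itself (Spock's where: data tables express the same idea more declaratively and keep the data right next to a readable scenario):

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import org.junit.jupiter.params.ParameterizedTest;
    import org.junit.jupiter.params.provider.CsvSource;

    class MaximumTest {

        // One scenario, several data rows: each CsvSource line runs as its own test case.
        @ParameterizedTest(name = "max of {0} and {1} is {2}")
        @CsvSource({
                "1, 3, 3",
                "7, 4, 7",
                "0, 0, 0"
        })
        void picksTheLargerOfTwoNumbers(int a, int b, int expected) {
            assertEquals(expected, Math.max(a, b));
        }
    }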
Suppose you are building an API in Rails. Is it enough if we write request specs alone, without the model specs, controller specs and view specs? Why do we need unit testing if we do acceptance and functional testing, or feature testing for front-end web projects? I insisted on doing unit testing, as it allows you to write decoupled code, but my colleague is against it. What are the best practices for this in the Ruby on Rails community?
If you only have time to do one type of testing, and you are writing an API, then it might make sense to only do feature testing by simply calling your API endpoints. After all, it's pretty important that those endpoints return the expected results.
However, when your feature tests start breaking, you will potentially have a terrible time figuring out the source of the problems without unit tests. Is there a core piece of your software that most of your endpoints are using? Good luck refactoring that without a robust set of unit tests.
But that really speaks to what you have to figure out -- is your core set pretty stable? Are you really just adding features or new endpoints? If so, you can probably get away with a heavy feature test approach.
Rails and testing go together like peas and carrots. Here is a great resource that highlights (better than I ever could) the importance of using tests (of all degrees) in your Rails projects. Hope this helps!
Not sure about "the community", but in my opinion it depends on the complexity of the project. If it's a very straightforward API project, doing only feature tests may be fine.
But if the project becomes larger, unit tests allow you to better pinpoint errors in case anything breaks. That is, instead of "there's a bug somewhere in feature X" you see "this or that class does not work when a particular method is invoked with specific arguments".
I'm really getting frustrated with learning how to properly develop software using TDD. It seems that everyone does it differently and in a different order. At this point, I'd just like to know what all the considerations are. This much is what I've come up with: I should use RSpec and Capybara. With that said, what are all the different types of tests I need to write to have a well-built and well-tested application? I'm looking for a list that covers the area of my application being tested, the framework needed to test it, and any dependencies.
For example, it seems that people advise starting by unit testing your models, but when I watch tutorials on TDD it seems like they only write integration tests. Am I missing something?
Well, the question "how do you TDD?" is as open-ended as the question "how do you test properly?". In Ruby, and more specifically in Rails, RSpec is the tool to start with, but not to stop with. RSpec allows you to write unit tests for your components and test them separately. In the Rails context, that means:
test your models
test your controllers
test your views
test your helpers
test your routes
It is a very good tool that is not strictly Rails-bound; it is also used to test other frameworks.
After you're done with RSpec, you should move on to Cucumber. Cucumber (http://cukes.info/) is the most widely used tool (again, in the Rails environment) for writing integration tests. You can then integrate Capybara with Cucumber.
After you're done with Cucumber, you'll have tested your application backend and (part of) its HTML output. That's when you should also test your JavaScript code. How? First, you'll have to unit test it. Jasmine (http://pivotal.github.com/jasmine/) is one of the tools you might use for the job.
Then you'll have to test how it integrates with the rest of your application. How? You'll come back to Cucumber and integrate Selenium (http://seleniumhq.org/) with your Cucumber setup, so you can test the integration "live" in the browser, with access to your JavaScript and exercising it on the spot.
So, after you're done with these steps, you'll have covered most of what is needed for a well-integrated test environment. Are we done? Not really. You should also set up a coverage tool (one option: https://github.com/colszowka/simplecov) to check whether your code is really well tested and no loose ends are left.
After you're done with these tedious steps, there is one last thing to do, in case you are not developing all alone and the team is big enough that keeping track of all this by hand becomes unmanageable: set up a test server (continuous integration), which will do nothing other than run all the previous steps regularly and deliver notifications about the results.
So, all of this sets up a good TDD environment for the interested developer. I only named the most widely used frameworks in the Ruby/Rails community for the different types of testing, but that doesn't mean there aren't other frameworks as suitable or more suitable for your job. It still doesn't teach you how to test properly; for that there's more theory involved, and a lot of sub-debates.
In case I forgot something, please write it in a comment below.
Besides that, you should decide how you want to write your tests. Namely, are you going for the declarative or the imperative approach?
Start simple and add more tools and techniques as you need them. There are many ways to TDD an app because every app is different. One way is to start with an end-to-end test with RSpec and Capybara (or Cucumber and Capybara) and then add more fine-grained tests as you need them.
You know you need more fine-grained tests when it takes more than a few minutes to make a Capybara test pass.
Also, if the domain of your application is non-trivial it might be more fruitful for you to start testing the domain first.
It depends! Try different approaches and see what works for you.
End-to-end development of real-world applications with TDD is an underdocumented activity indeed. It's true that you'll mostly find schoolbook examples, katas and theoretical articles out there. However, a few books take a more comprehensive and practical approach to TDD: GOOS (Growing Object-Oriented Software, Guided by Tests), for instance (highly recommended), and, to a lesser extent, Beck's Test Driven Development by Example, although they don't address RoR specifically.
The approach described in GOOS starts with writing end-to-end acceptance tests (integration tests, which may amount to RSpec tests in your case), but within that loop you write as many TDD unit tests as you need to design your lower-level objects. When writing those you can basically start wherever you want: from the outer layers, the inner layers, or just the parts of your application that are most convenient to you. As long as you mock out any dependencies, they'll remain unit tests anyway.
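To make that inner loop concrete, here is a minimal Java sketch (the class and interface names are invented for illustration): while the outer acceptance test is still red, a lower-level object is driven out with a unit test, and its not-yet-written collaborator is replaced by a Mockito mock, so the test stays a unit test no matter which layer you start from.

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.when;

    import org.junit.jupiter.api.Test;

    class QuoteServiceTest {

        // Collaborator that may not even exist yet; the mock lets us design its interface.
        interface ExchangeRateProvider {
            double rateFor(String currency);
        }

        // The lower-level object currently being driven out test-first.
        static class QuoteService {
            private final ExchangeRateProvider rates;

            QuoteService(ExchangeRateProvider rates) {
                this.rates = rates;
            }

            double quoteIn(String currency, double amountInEur) {
                return amountInEur * rates.rateFor(currency);
            }
        }

        @Test
        void convertsAQuoteUsingTheProvidedRate() {
            ExchangeRateProvider rates = mock(ExchangeRateProvider.class);
            when(rates.rateFor("USD")).thenReturn(1.10);

            QuoteService service = new QuoteService(rates);

            assertEquals(110.0, service.quoteIn("USD", 100.0), 0.001);
        }
    }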
I had the same question when I started learning Rails. There are so many tools and methods to make testing better, but after spending too much time on them, I finally realized that you can simply forget the rule that you must do this or not do that: test whatever you think is most likely to have problems first, then move on to the rest. Well, it takes time.
That's just my point of view.
I am trying to get started with BDD and found a few blog posts about MSpec and SpecFlow. I'm currently not quite sure when I would use which, and what the advantages/disadvantages of either framework are.
Looking at the documentation, it seems that MSpec uses the context/specification style whereas SpecFlow uses the Given/When/Then style. I don't really mind either, but I would like to know if there are any pitfalls to watch out for further down the track when the project/test suite grows.
Basically some real world advice/feedback of someone who uses it in their every day work would be great.
So I've used both.
I like the MSpec workflow in a way, because it's an easier sell for me to speak to users and say:
"When logging in"
"I should return to the page I requested"
When I've worked for organisations that have bought more into active collaboration (read: agile), I've used the Given/When/Then pattern. Those organisations were used to user stories, so they were comfortable with a more rigid style of specification. Also, we were feeding the specs into more than one tool, so the text-only feature files could be reused between tools.
In my own projects I use SpecFlow for the 'outside' tests and MSpec for the 'inside' tests.
If I were to give someone advice, it would be to use SpecFlow if non-technical people are writing the outside specs, and MSpec if a developer is writing them.
Bad points:
MSpec leads to class explosion
SpecFlow has a slower workflow
Good points:
MSpec reads as more natural language
SpecFlow is better for reusing steps
The bottom line is they work well together.
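Whichever tool you end up with, the Given/When/Then shape itself is tool-agnostic; even a plain unit test can follow it. A tiny sketch in Java (the Session class is made up and trivially accepts any credentials, just to keep the example short):

    import static org.junit.jupiter.api.Assertions.assertTrue;

    import org.junit.jupiter.api.Test;

    class LoginTest {

        // Hypothetical session object; real credential checking is omitted for brevity.
        static class Session {
            private boolean loggedIn;

            void logIn(String user, String password) {
                loggedIn = true;
            }

            boolean isLoggedIn() {
                return loggedIn;
            }
        }

        @Test
        void aUserWithValidCredentialsEndsUpLoggedIn() {
            // Given a fresh session
            Session session = new Session();

            // When the user logs in
            session.logIn("alice", "secret");

            // Then the session is authenticated
            assertTrue(session.isLoggedIn());
        }
    }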
One disadvantage of MSpec is that you cannot run tests in parallel, whereas with the SpecFlow runner you can. That is a big performance issue.
I've been reading up on how to do unit testing and find that it is quite easy, but what I want to know is: in an ASP.NET MVC application, what is REALLY important to test, and which methods do you use?
I just can't find a clear answer about WHAT TO REALLY TEST when writing unit tests.
I just don't want to write unnecessary tests and lose development time on overkill.
You should unit test as much as possible of your application.
For every line of code you write, you need to verify that it works. If you don't unit test it, you need to test it in some other fashion. Even starting up the site and clicking around is a sort of testing.
When you compare unit testing with other sorts of testing (including running the site and manually using it), unit tests tend to give the best return on investment because they are relatively easy to write and maintain, and can give you rapid feedback on whether you just introduced a regression bug or not.
I'm not saying that there's no overhead in writing unit tests - there is, but there's overhead in any sort of testing, and a big overhead in not testing at all (because regression bugs slip through quite easily).
It's still good practice to supplement unit tests with other types of tests, but a good unit test suite offers an excellent regression test suite.
Ron Jeffries says "Test everything that could possibly break."
Someone else - I think it was Kent Beck, but I can't find a reference - says, "Only test the code you want to work."
Either of these is a pretty good strategy.
I actually don't think anything needs to be tested in the MVC layer itself. All your business logic, rules, etc. need testing, but the views and controllers?
The only real reason I can see to test a Controller is for integration testing. If all your business logic is correct then that should be a simple test always returning true.
Controllers should really only get data from the view and pass data to it so....
As for views, what sort of testing can be done there other than to open the view and see what it does?
When I write my projects, there is next to no code in the controllers, and I put all the grunt work in my business engine, which I have extensive tests for.
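The thread is about ASP.NET MVC, but the split is framework-agnostic, so here is a rough Java sketch of the same idea (DiscountEngine is invented for illustration): all the rules sit in a plain class with thorough unit tests, and the controller that calls it is kept thin enough that there is little left to test.

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import org.junit.jupiter.api.Test;

    class DiscountEngineTest {

        // Plain "business engine" class: no controller, no view, no framework.
        static class DiscountEngine {
            double discountFor(int itemCount) {
                return itemCount >= 10 ? 0.10 : 0.0;
            }
        }

        @Test
        void ordersOfTenOrMoreItemsGetTenPercentOff() {
            assertEquals(0.10, new DiscountEngine().discountFor(10), 0.001);
        }

        @Test
        void smallOrdersGetNoDiscount() {
            assertEquals(0.0, new DiscountEngine().discountFor(3), 0.001);
        }
    }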
Unit testing is good for testing services/models. But when you need to test application functionality, the better choice is functional tests (e.g. Selenium).
I admit that I have almost no experience with unit testing. I gave DUnit a try a while ago but gave up because there were so many dependencies between the classes in my application.
It is a rather big Delphi application (about 1.5 million lines of source) and we are a team maintaining it.
Testing is currently done by one person who uses the application before release and reports bugs. I have also set up some GUI tests in TestComplete 6, but they often fail because of changes in the application.
Bold for Delphi is used as the persistence framework against the database.
We all agree that unit testing is the way to go, and we plan to write a new application in .NET with ECO as the persistence framework.
I just don't know where to start with unit testing...
Any good books, URLs, best practices, etc.?
Well, the challenge in unit testing is not the testing itself, but writing testable code. If the code was written without testing in mind, you'll probably have a really hard time.
Anyway, if you can refactor, do refactor to make the code testable. Avoid mixing object creation with logic whenever possible (I don't know Delphi, but there might be a dependency injection framework to help with this).
This blog has lots of good insight about testing. Check this article for instance (my first suggestion was based on it).
As a suggestion, try testing the leaf nodes of your code first: the classes that don't depend on others. They should be easier to test, as they don't require mocks.
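As a rough illustration of both points (keeping construction out of the logic, and starting with code that is easy to isolate), here is a small Java sketch with invented names: the repository comes in through the constructor, so the test can hand in a throwaway fake instead of a real database connection.

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import org.junit.jupiter.api.Test;

    class InvoiceServiceTest {

        // Hypothetical collaborator; in the real application it would hit the database.
        interface InvoiceRepository {
            double outstandingAmountFor(String customerId);
        }

        // Construction is separated from logic: the dependency is injected,
        // so tests can substitute a fake.
        static class InvoiceService {
            private final InvoiceRepository repository;

            InvoiceService(InvoiceRepository repository) {
                this.repository = repository;
            }

            double totalOwedBy(String customerId) {
                return repository.outstandingAmountFor(customerId);
            }
        }

        @Test
        void reportsTheOutstandingAmountForACustomer() {
            // A one-line fake stands in for the database.
            InvoiceRepository fake = customerId -> 150.0;

            InvoiceService service = new InvoiceService(fake);

            assertEquals(150.0, service.totalOwedBy("customer-42"), 0.001);
        }
    }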
Writing unit tests for legacy code usually requires a lot of refactoring.
An excellent book that covers this is Michael Feathers' "Working Effectively with Legacy Code".
One additional suggestion: use a unit test coverage tool to track your progress in this work. I'm not sure what the good coverage tools for Delphi code are, though. I guess that would be a different question/topic.
Working Effectively with Legacy Code
One of the more popular approaches is to write the unit tests as you modify the code. All new code gets unit tests, and for any code you modify you first write its test, verify it, modify the code, re-verify it, and then write or fix any tests you need because of your modifications.
One of the big advantages of having good unit test coverage is being able to verify that the changes you make don't inadvertently break something else. This approach allows you to do that, while focusing your efforts on your immediate needs.
The alternate approach I've employed is to develop my unit tests via Co-Ops :)
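One way to make the "first write its test" step concrete when the intended behaviour is undocumented is a characterization test, as described in Feathers' book: pin down what the code does today, then modify it with that safety net in place. A minimal Java sketch (LegacyPriceCalculator is a stand-in for your real class):

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import org.junit.jupiter.api.Test;

    class LegacyPriceCalculatorCharacterizationTest {

        // Stand-in for the real legacy class (your actual class would live elsewhere).
        static class LegacyPriceCalculator {
            double priceFor(int quantity, double unitPrice) {
                double total = quantity * unitPrice;
                return quantity >= 10 ? total * 0.9 : total; // existing, undocumented rule
            }
        }

        @Test
        void keepsTheCurrentBulkDiscountBehaviour() {
            LegacyPriceCalculator calculator = new LegacyPriceCalculator();

            // The expected value was copied from what the existing code actually
            // returns today, not from a specification; the test exists to flag
            // any change in that behaviour during later modifications.
            assertEquals(90.0, calculator.priceFor(10, 10.0), 0.001);
        }
    }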
When you work with legacy code, mock objects are really useful for building unit tests.
Take a look at this question regarding Delphi and mocks: What is your favorite Delphi mocking library?
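For readers more at home on the JVM, this is roughly the kind of test a mocking library buys you (sketched with Mockito and made-up class names; Delphi mocking libraries offer the same idea): the legacy class is exercised against a mock, and the test verifies that the expected call happened, with no real mail server involved.

    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.verify;

    import org.junit.jupiter.api.Test;

    class OrderProcessorTest {

        // Dependency we do not want to touch for real in a test.
        interface MailSender {
            void sendConfirmation(String orderId);
        }

        // Hypothetical legacy class, already refactored to accept its dependency.
        static class OrderProcessor {
            private final MailSender mailSender;

            OrderProcessor(MailSender mailSender) {
                this.mailSender = mailSender;
            }

            void process(String orderId) {
                // ... imagine the actual legacy processing logic here ...
                mailSender.sendConfirmation(orderId);
            }
        }

        @Test
        void sendsAConfirmationMailWhenAnOrderIsProcessed() {
            MailSender mailSender = mock(MailSender.class);

            new OrderProcessor(mailSender).process("order-7");

            verify(mailSender).sendConfirmation("order-7");
        }
    }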
For .NET unit testing, read "The Art of Unit Testing: With Examples in .NET".
About best practices:
What you said is right: sometimes it's difficult to write unit tests because of the dependencies between classes...
So write unit tests just after, or just before ;-), implementing the classes. That way, if you have difficulty writing the tests, it may mean you have a design problem!