Need to execute a script in the middle of a test case - ruby-on-rails

I have a Rails application backed by MySQL, and a workflow an end user follows to register her software. The software is local to the end user and runs on her machine. Partway through the workflow, the Rails app makes an HTTP request to that software and it responds; this handshake between the Rails app and the software updates a couple of entries in the database.
Now I have to write a test case for this: after the workflow is done, the proper entries have been added to the database, the workflow executed successfully, and the handshake took place, so a complete cycle.
I'm looking for the best approach to take here.
For now we haven't prepared, and aren't planning, a thorough way of testing the entire app; we're only preparing a few important test cases, and this is the first one of its kind. So far we have been doing it manually.
Being lazy, we now want to automate this, and I am thinking of using Watir. I have a software simulator for the handshake; I could run that simulator from Watir and get the whole cycle tested.
Does it sound reasonable for my Watir/Ruby script to be:
executing the simulator script
checking the db state
executing the workflow
stopping the simulator script
checking the db state again
Obviously all the Ruby/Rails units involved here would have their own unit tests prepared separately, but here I am interested in testing the whole cycle.
Any better suggestions, comments?

It's important to have tests at the unit AND functional level, IMO, so I think your general approach is good.
Watir, or Selenium/WebDriver would be good tools to use. If you don't already have an approach in mind, you should check out Cheezy's (Jeff Morgan's) page-object gem. It works with Watir-webdriver and Selenium-webdriver.
I like that you explicitly call out hitting the database to check for proper record creation. It's really important to hit the DB as a test oracle to ensure your system's working properly.
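A rough sketch of what that cycle might look like, assuming an RSpec 3 setup that loads the Rails environment (so the database can be queried through ActiveRecord) plus watir-webdriver; the simulator script, URL, field names, Registration model, and handshake_confirmed flag are all hypothetical stand-ins:

    # spec/integration/registration_cycle_spec.rb -- all names here are hypothetical
    require "rails_helper"       # assumes an RSpec setup that loads the Rails environment
    require "watir-webdriver"

    describe "software registration cycle" do
      it "completes the workflow and the handshake, and writes the expected rows" do
        simulator = Process.spawn("ruby", "simulator.rb")   # local-software simulator
        before = Registration.count
        browser = nil

        begin
          browser = Watir::Browser.new
          browser.goto "http://localhost:3000/register"
          browser.text_field(name: "license_key").set("ABC-123")
          browser.button(value: "Register").click

          # hit the database as the test oracle
          expect(Registration.count).to eq(before + 1)
          expect(Registration.last.handshake_confirmed).to be(true)  # hypothetical flag set by the handshake
        ensure
          browser.close if browser
          Process.kill("TERM", simulator)
          Process.wait(simulator)
        end
      end
    end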

Don't want to start a philosophical debate, but I'll say that going down the road you're thinking of has been a time killer for me in the past. I'd strongly recommend spending the time refactoring your code into a structure that can be unit tested. One of the truly nice side effects of focusing on unit tests is that you end up creating a code base that follows the Single Responsibility Principle, whether you realize it or not.
Also, consider skimming this debate about the fallacies of higher-level testing frameworks
Anyway, good luck, friend.

Related

Circle CI Analysis

I am curious whether it is possible to automatically measure, from the Circle CI interface, whether a test suite is flaky. I would define flaky as: fails, then passes on a re-trigger. Is this easy to do?
Not at the moment, as far as I'm aware. I've done extensive research on build insights in general, which included flaky-test analysis and monitoring, and finally decided to build my own tool. The good news is that, last I checked, they seem to be focusing on creating better insights tools in addition to what they currently have. They'll tell you all about it if you reach out to them.
In the interim, you have a few options:
Ask them how far away they are from supporting your idea of what a flaky test is (I'm hoping this point becomes outdated shortly as they work on it)
Consume their data through their API (which is decent enough), build your own tool, and crunch the numbers yourself (this is what I ended up doing, and it isn't too bad)
For example: generally speaking, a flaky test for my team is one that failed more than a few times over a large timespan. Their API tells you whether a build failed, which test failed, and when and how. That gave me enough to work with to decide whether to consider a given spec failure flaky or not. I'd assume your case is sort of similar, with maybe the only difference being whether the build was re-triggered (I'm unsure if they provide that info specifically, but you could refer to the workflow, commit, and build IDs to figure it out, e.g. whether a new run shares the same build ID).
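As a rough illustration of the "crunch the numbers yourself" option, here is a minimal Ruby sketch against what I believe is the v1.1 recent-builds endpoint; treat the URL, parameters, and response field names as assumptions to verify against CircleCI's API docs, and ORG/REPO plus the token as placeholders:

    require "net/http"
    require "json"
    require "uri"

    # Assumed v1.1 endpoint for a project's recent builds; verify against the docs.
    uri = URI("https://circleci.com/api/v1.1/project/github/ORG/REPO" \
              "?circle-token=#{ENV['CIRCLE_TOKEN']}&limit=100&filter=completed")
    builds = JSON.parse(Net::HTTP.get(uri))

    # Group builds by commit; a commit with both failed and successful builds is a
    # candidate "fail, then pass on a re-trigger" flake.
    grouped = builds.group_by { |b| b["vcs_revision"] }        # field name assumed
    flaky = grouped.select do |_rev, runs|
      statuses = runs.map { |r| r["status"] }                  # field name assumed
      statuses.include?("failed") && statuses.include?("success")
    end

    flaky.each do |rev, runs|
      puts "#{rev[0, 8]}: #{runs.map { |r| r['status'] }.join(', ')}"
    end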
With that being said, the "how easy is it?" part of your question is something I can't really say for certain. It was a relatively easy learning curve to go through their APIs, get familiar with them, run a couple of requests, look at the data, massage it, store it in the DB, then build a web interface around it. But I'm not sure how much familiarity and experience the people building the tool on your end have.

What are all the pieces to an effective TDD strategy?

I'm really getting frustrated with learning how to properly develop software using TDD. It seems that everyone does it differently and in a different order. At this point, I'd just like to know what all the considerations are. This much is what I've come up with: I should use RSpec and Capybara. With that said, what are all the different types of tests I need to write to have a well-built and well-tested application? I'm looking for a list that covers the area of my application being tested, the framework needed to test it, and any dependencies.
For example, it seems that people advise starting by unit testing your models, but when I watch tutorials on TDD it seems like they only write integration tests. Am I missing something?
Well, the theme "how do you TDD?" is as much out in the open as the theme "how do you test properly?". In Ruby, and more specifically in Rails, RSpec is the tool to start with, though not to stop at. RSpec allows you to write unit tests for your components and test them separately. In a Rails context, that means:
test your models
test your controllers
test your views
test your helpers
test your routes
It is a very good tool that isn't strictly Rails-bound; it is also used to test other frameworks.
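For instance, a minimal model spec might look like this (the User model, its email validation, and the rails_helper require are assumptions about your setup):

    # spec/models/user_spec.rb
    require "rails_helper"

    RSpec.describe User, type: :model do
      it "is invalid without an email" do
        user = User.new(email: nil)
        expect(user).not_to be_valid
        expect(user.errors[:email]).to include("can't be blank")
      end
    end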
After you're done with RSpec, you should jump to Cucumber. Cucumber (http://cukes.info/) is the most widely used tool (again, in the Rails environment) for writing integration tests. You can then integrate Capybara with Cucumber.
After you're done with Cucumber, you'll have tested your application's backend and (part of) its HTML output. That's when you should also test your JavaScript code. How? First, unit test it. Jasmine (http://pivotal.github.com/jasmine/) is one of the tools you might use for the job.
Then you'll have to test how it integrates with the rest of your structure. How? You'll come back to Cucumber and integrate Selenium (http://seleniumhq.org/) with your Cucumber setup, and you'll be able to test the integration "live" in the browser, with access to your JavaScript magic, testing it on the spot.
So, after these steps, you'll have covered most of what is needed for a well-integrated test environment. Are we done? Not really. You should also set up a coverage tool (one option: https://github.com/colszowka/simplecov) to check that your code really is well tested and no loose ends are left.
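Wiring SimpleCov in is typically a couple of lines at the very top of your test helper, before any application code is loaded (a sketch; the filter is just an example):

    # spec/spec_helper.rb (or test/test_helper.rb) -- must come before the app is required
    require "simplecov"
    SimpleCov.start "rails" do
      add_filter "/spec/"   # don't count the specs themselves toward coverage
    end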
After all these laborious steps, there is one last thing to do, in case you are not developing alone and the team is big enough that running everything by hand becomes unmanageable: set up a test server that does nothing but run all the previous steps regularly and deliver notifications about the results.
So, all of this sets up a good TDD environment for the interested developer. I've only named the most used frameworks in the Ruby/Rails community for the different types of testing, but that doesn't mean there aren't other frameworks as suitable, or more suitable, for your job. It still doesn't teach you how to test properly; for that there's more theory involved, and a lot of sub-debates.
In case I forgot something, please write it in a comment below.
Besides that, you should think about how you test: namely, are you going for the declarative or the imperative approach?
Start simple and add more tools and techniques as you need them. There are many ways to TDD an app because every app is different. One way is to start with an end-to-end test with RSpec and Capybara (or Cucumber and Capybara) and then add more fine-grained tests as you need them.
You know you need more fine-grained tests when it takes more than a few minutes to make a Capybara test pass.
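To make that end-to-end starting point concrete, such a test could look roughly like this (the sign-up route, field labels, and User model are hypothetical; assumes RSpec 3 with Capybara's feature DSL available):

    # spec/features/sign_up_spec.rb
    require "rails_helper"

    RSpec.feature "Signing up" do
      scenario "a visitor creates an account" do
        visit "/sign_up"                                  # hypothetical route
        fill_in "Email",    with: "alice@example.com"
        fill_in "Password", with: "secret123"
        click_button "Sign up"

        expect(page).to have_content("Welcome")
        expect(User.count).to eq(1)
      end
    end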
Also, if the domain of your application is non-trivial it might be more fruitful for you to start testing the domain first.
It depends! Try different approaches and see what works for you.
End-to-end development of real-world applications with TDD is an under-documented activity indeed. It's true that you'll mostly find schoolbook examples, katas, and theoretical articles out there. However, a few books take a more comprehensive and practical approach to TDD: GOOS (Growing Object-Oriented Software, Guided by Tests), for instance (highly recommended), and, to a lesser extent, Beck's Test-Driven Development: By Example, although they don't address RoR specifically.
The approach described in GOOS starts with writing end-to-end acceptance tests (integration tests, which may amount to RSpec tests in your case), but within that loop you write as many TDD unit tests as you need to design your lower-level objects. When writing those you can basically start wherever you want: from the outer layers, the inner layers, or just the parts of your application that are most convenient for you. As long as you mock out any dependencies, they'll remain unit tests anyway.
I had the same question when I started learning Rails. There are so many tools and methods for making tests better, but after spending too much time on that, I finally realized you can simply forget the rules about what you must or must not do: test the thing you think is most likely to have problems first, then move on to the rest. It takes time.
That's just my point of view.

What's the best strategy for adding tests to an existing rails project?

There is an existing project that is already deployed in production. We want to add some tests to it (the sooner the better), and I have to choose between going the BDD way (RSpec/Cucumber) or the TDD way (Test::Unit). I am just starting with BDD and I am wondering what the best decision would be. I am afraid that using RSpec/Cucumber on an existing Rails project (which was deployed this week and requires really fast iterations) will be quite hard, especially since it is not supposed to be used this way: we are supposed to write stories/features first and iterate from there.
Test::Unit might be the more reasonable choice, maybe.
Do you have any thoughts on that? An experience to share? Some advice?
I believe the easiest way to get coverage for an existing application is to use Cucumber. This will allow you to describe and document how the website/application should work (and should keep working).
Because it works from the outside in, it also has the advantage that you do not need to fully understand the inner workings yet. At the same time, you test all layers of the application (model, view, controller) in one test.
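A sketch of what such an outside-in Cucumber scenario and its step definitions might look like for an existing page (the Article model, route, and wording are hypothetical; assumes cucumber-rails with Capybara):

    # features/step_definitions/article_steps.rb
    #
    # features/browsing_articles.feature would read something like:
    #
    #   Feature: Browsing articles
    #     Scenario: Visitor reads the latest article
    #       Given an article titled "Hello world"
    #       When I visit the articles page
    #       Then I should see "Hello world"

    Given(/^an article titled "([^"]*)"$/) do |title|
      Article.create!(title: title, body: "...")
    end

    When(/^I visit the articles page$/) do
      visit articles_path          # Capybara, via cucumber-rails
    end

    Then(/^I should see "([^"]*)"$/) do |text|
      expect(page).to have_content(text)
    end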
When you start actually changing code, I would add unit tests for the code you are changing, using your favourite testing framework. I personally favour RSpec, but as you know this is a personal choice :)

Best way to add tests to an existing Rails project?

I have a Rails project which I neglected to build tests for (for shame!) and the code base has gotten pretty large. A friend of mine said that RSpec was a pain to use unless you use it from the beginning. Is this true? What would make him say that?
So, considering the available tests suites and the fact that the code base is already there, what would be my best course of action for getting this thing testable? Is it really that much different than doing it from the beginning?
This question came up recently on the RSpec mailing list, and the advice we generally gave was:
Don't bother trying to retro-fit specs to existing, working, code unless you're going to change it - it's exhausting and, unless the code needs to be changed, rather pointless.
Start writing specs for any changes you make from now on. Bug fixes are an especially good opportunity for this.
Try to train yourself into the discipline of writing a failing example (i.e. a spec) to drive out the change before you touch the code.
You may find that the design of code which wasn't driven out by code examples or unit tests makes it awkward to write tests or specs for. This is perhaps what your friend was alluding to. You will almost certainly need to learn a few key refactoring techniques to break up dependencies so that you can exercise each class in isolation from your specs. Michael Feathers' excellent book, Working Effectively With Legacy Code has some great material to help you learn this delicate skill.
I'd also encourage you to use the built-in spec:rcov rake task to generate code coverage stats. It's extremely rewarding to watch these numbers go up as you start to get your codebase under test.
Maybe start with the models? They should be testable in isolation, which ought to make them the lowest-hanging fruit.
Then pick a model and start writing tests that say what it does. As you go along, think about other ways to test the code: are there edge cases that maybe you're not sure about? Write the tests and see how the model behaves. As you develop the tests, you may see areas of the code that aren't as clean and de-duplicated (DRY) as they might be. Now that you have tests, you can refactor the code, since you know you're not affecting behaviour. Try not to start improving the design until you have tests in place - that way lies madness.
Once you have the models pinned down, move up.
That's one way. Alternatives might be starting with views or controllers, but you may find it easier to start with end-to-end transaction tests and work your way into smaller and smaller pieces as you go along.
The accepted answer is good advice - although not practical in some instances. I recently was faced with this problem on a few apps of mine because I NEEDED tests for existing code. There simply was no other way around it.
I started off doing all unit tests, then moved onto functionals.
Get in the habit of writing failing tests for any new code, or whenever you're going to change a part of the system. I've found this has helped me gain more knowledge of testing as I go.
Use rcov to measure your progress.
Good luck!
Writing tests for existing code may reveal bugs in your code. These tests force you to look at the existing code to see what test you need to write to get it to pass, and you may spot code that could be written better, or is now useless.
Another tip is to write a test whenever you encounter a bug, so it can never recur; this is called regression testing.
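For example, a regression test pinning down a fixed bug can be as small as this (the Order model and the nil-discount bug are made up for illustration):

    # spec/models/order_spec.rb
    require "rails_helper"

    RSpec.describe Order, type: :model do
      # Regression: totals used to blow up when discount was nil (hypothetical bug).
      it "treats a missing discount as zero" do
        order = Order.new(subtotal: 100, discount: nil)
        expect(order.total).to eq(100)
      end
    end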
Retrofitting specs is not inevitably a bad idea. You go from working code to working code with known properties, which lets you tell whether any future change breaks anything. At the moment, if you need to make a change, how can you know what it will affect?
What people mean when they say that it is hard to add tests/specs to existing code is that code which is hard to test is often highly coupled, which makes it hard to write low-level isolated tests.
One idea would be to start with full-stack tests using something like the RSpec story runner. You can then work from the 'outside in' isolating what you can in low-level isolated tests and gradually untangle the harder code bit by bit.
You can start by writing "characterization tests". For this, you might want to try out the pretentious gem here:
It is still a work in progress though.

Ruby on Rails - Why use tests?

I'm confused about what the various testing appliances in Ruby on Rails are for. I have been using the framework for about 6 months but I've never understood the testing part of it. The only testing I've used is JUnit3 in Java and that only briefly.
Everything I've read about it just shows testing validations. Shouldn't the validations in Rails just work? It seems more like testing the framework than testing your code. Why would you need to test validations?
Furthermore, the tests seem super fragile to any change in your code. So if you change anything in your models, you have to change your tests and fixtures to match. Doesn't this violate the DRY principle?
Third, writing test code seems to take a lot of time. Is that normal? Wouldn't it just be faster to refresh my browser and see if it worked? I already have to play with my application just to see if it flows correctly and make sure my CSS hasn't exploded. Why wouldn't manual testing be enough?
I've asked these questions before and I haven't gotten more than "automated testing is automated". I am smart enough to figure out the advantages of automating a task. My problem is that the costs of writing tests seem absurdly high compared to the benefits. That said, any detailed response is welcome because I probably missed a benefit or two.
Shouldn't the validations in rails just work? It seems more like testing the framework than testing your code. Why would you need to test validations?
The validations in Rails do work -- in fact, there are unit tests in the Rails codebase to ensure it. When you test a model's validation, you're testing the specifics of the validation: the length, the accepted values, etc. You're making sure the code was written as intended. Some validations are simple helpers, and you may opt not to test them on the notion that "no one can mess up a validates_numericality_of call." Is that true? Does every developer always remember to write it in the first place? Does every developer never accidentally delete a line on a bad copy-paste? In my personal opinion, you don't need to test every last combination of values for a Rails validation helper, but you need a line to test that it's there with the right values passed, just in case some punk changes it in the future without proper forethought.
Further, other validations are more complex, requiring lots of custom code -- they may warrant more thorough testing.
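For example, a minimal check that the validation is there with the right values passed could look like this (the Product model and its price validation are hypothetical):

    # spec/models/product_spec.rb
    require "rails_helper"

    RSpec.describe Product, type: :model do
      it "rejects a non-numeric price" do
        product = Product.new(price: "abc")
        product.valid?
        expect(product.errors[:price]).not_to be_empty
      end

      it "accepts a numeric price" do
        product = Product.new(price: 19.99)
        product.valid?
        expect(product.errors[:price]).to be_empty
      end
    end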
Furthermore, the tests seem super fragile to any change in your code. So if you change anything in your models, you have to change your tests and fixtures to match. Doesn't this violate the DRY principle?
I don't believe it violates DRY. They're communicating (that's what programming is, communication) two very different things. The test says the code should do something. The code says what it actually does. Testing is extremely important when there is a disconnect between those things.
Test code and application code are intimately linked, obviously. I think of them as two sides of a coin. You wouldn't want a front without a back, or a back without a front. Good test code reinforces good application code, and vice versa. The two together are used to understand the whole problem that you're trying to solve. And well written test code is documentation -- it shows how the application code should be used.
Third, writing test code seems to take a lot of time. Is that normal? Wouldn't it just be faster to refresh my browser and see if it worked? I already have to play with my application just to see if it flows correctly and make sure my CSS hasn't exploded. Why wouldn't manual testing be enough?
You've only worked on very small projects, for which that testing is arguably sufficient. However, when you work on a project with several developers, thousands or tens of thousands of lines of code, integration points with web services, third party libraries, multiple databases, months of development and requirements changes, etc, there are a lot of other factors in play. Manual testing is simply not enough. In a project of any real complexity, changes in one place can often have unforeseen results in others. Proper architecture helps mitigate this problem, but automated testing helps as well (and helps identify points where the architecture can be improved) by identifying when a change in one place breaks another.
My problem is that the costs of writing tests seem absurdly high compared to the benefits. That said, any detailed response is welcome because I probably missed a benefit or two.
I'll list a few more benefits.
If you test first (Test Driven Development) your code will probably be better. I haven't met a programmer who gave it a solid shot for whom this wasn't the case. Testing first forces you to think about the problem and actually design your solution, versus hacking it out. Further, it forces you to understand the problem domain well enough to where if you do have to hack it out, you know your code works within the limitations you've defined.
If you have full test coverage, you can refactor with NO RISK. If a software problem is very complicated (again, real world projects that last for months tend to be complicated) then you may wish to simplify code that has previously been written. So, you can write new code to replace the old code, and if it passes all of your tests, you're done. It does exactly what the old code did with respect to the tests. For a project that plans to use an agile development method, refactoring is absolutely essential. Changes will always need to be made.
To sum up, automated testing, especially test driven development, is basically a method of managing the complexity of software development. If your project isn't very complex, the cost may outweigh the benefits (although I doubt it). However, real world projects tend to be very complex, and the results of testing and TDD speak for themselves: they work.
(If you're curious, I find Dan North's article on Behavior Driven Development to be very helpful in understanding a lot of the value in testing: http://dannorth.net/introducing-bdd)
I haven't really used Rails much, but I would think that these automated tests would be useful as smoke tests to be sure that the thing you just did doesn't break something that you did last week. This will become increasingly important as your project grows.
Also, writing the tests before you write the code (using the Test-Driven-Development model) will help you write the code better and faster, since the tests force you to fully think the problem through. It will also help you to know where to break up complex methods into smaller methods that you can test individually.
You are right, writing and maintaining tests takes a lot of time. Sometimes more time than the code itself. However, it can save you time in bug fixing and refactoring for the reasons above.
Tests should validate your application logic. Personally, I think my most important tests are the ones I run in Selenium. They check that what shows up in the browser is actually what I expect to see. However, if that's all I had, I would find it hard to debug - it helps to have lower-level tests as well, and integration, functional, and unit tests are all useful tools. Unit tests let you check that the model behaves the way you expect it to (and that means every method, not just validations). Validations will certainly Just Work, but only if you get them right. If you get them wrong, they will Just Work, but not the way you expected. Writing a couple of lines of test is quicker than debugging later on.
A simple example like the one at http://wiseheartdesign.com/2006/01/16/testing-rails-validations just checks validations in a unit test. The O'Reilly article at http://www.oreillynet.com/pub/a/ruby/2007/06/07/rails-testing-not-just-for-the-paranoid.html?page=1 is a bit more complete (though still fairly basic).
Automated testing is particularly useful in regression testing where you change something and run a suite of tests to check that you didn't break anything else.
Tests are a form of repetition, but they don't violate DRY because they express things in a different way. A test says "I did X so Y should happen". Code says "X happened, so now I need to do Z, which happens to cause Y to happen". i.e. a test stimulates a cause and checks an effect, while code responds to a cause, and effects something.
A lot of the testing tutorials and the sample tests created by the Rails generators are pretty lame and IMHO that can give the mistaken impression that you're supposed to test stupid stuff like the built in Rails methods, etc.
Since Rails has its own test suite, there's no point in you writing or running tests that only exercise built-in Rails functionality. Your tests should exercise the code you're writing! :-)
As for the relative merit of running tests vs just refreshing in your browser.. The larger your app gets, the more of a pain in the ass it is to have to manually run through numerous scenarios and edge cases to make sure nothing in your application has broken. Eventually, you'll stop testing your entire application after each change and just start "spot testing" the areas you think should have been affected. Inevitably, you'll find something that used to work months ago that is now completely broken, and you have no certainty when it broke or which changes broke it. After that happens enough times... you'll come to value automated testing.... :-)
For example:
I work on a 25,000+ line project (yes, in Rails 1.2), and last Monday I was asked whether I could make users disappear from every list except the admin ones if their "leave_date" attribute was set to a date in the past.
You can rewrite every list action (50+) to put a
    @users.reject! { |u| Date.today > u.leave_date }
Or you can override the "find" method (DRY ;-)), but only if you have tests (on everything that finds users!) will you know you didn't break anything by overriding User#find!
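A hypothetical sketch of the kind of spec that would catch a regression there, regardless of whether the filtering lives in a find override, a default scope, or each list action (the attribute names and setup are made up):

    # spec/models/user_visibility_spec.rb -- names and setup are hypothetical
    require "rails_helper"

    RSpec.describe User do
      it "hides users whose leave_date is in the past from normal lookups" do
        active   = User.create!(name: "Active",   leave_date: nil)
        departed = User.create!(name: "Departed", leave_date: Date.today - 7)

        expect(User.all).to include(active)
        expect(User.all).not_to include(departed)
      end
    end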
Everything I've read about it just shows testing validations. Shouldn't the validations in Rails just work? It seems more like testing the framework than testing your code. Why would you need to test validations?
There's a good Railscast showing one way to test controllers.
