Testing a console app using SpecFlow - bdd

I want to use SpecFlow to test a console app. I want SpecFlow to start the console app and interact with it in exactly the same manner a user would via standard in/out.
Is this possible?

Yes, in fact I have one such solution open on my machine right now. My advice is "Don't do it!".
I open the solution and NCrunch (which is simply a hyper-efficient test runner) starts up some tests for a scheduling system we use to coordinate multiple servers. Some of these tests check timescales and ensure that processes start and stop as they are supposed to. I can tell, because every time I try to type something a calc.exe window pops up to steal my focus, and it really gets in the way. Is that what you really want to do?
Don't forget SpecFlow is really a business-requirements automation system, and that is "a good thing". So far, though, you've only focused on the technical issues here.
I'd suggest that you think about your requirements again. Where you want to start a process and check that it interacts with you, simply test its arguments and results. Use mocks if you need to isolate its functionality (like I didn't do when I wrote my tests, oops). Try to make your tests as simple as possible.
Think of it this way.
- Do you really need to test that Process.Start opens a Window? Surely MS got that right? :-)
- Do you really need to test that Console.ReadLine gets a string?
- And won't your tests be simpler if you separately test MyArgumentParser and MyBusinessLogic, with mocks splitting them up?
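A minimal sketch of that split, in Ruby for brevity (the class and flag names here are hypothetical, not from the question): the parser and the logic are plain objects, so each can be unit tested without spawning a process or touching the console.

```ruby
# Hypothetical split: parsing is pure, logic is pure, and only the thin
# shell at the bottom ever touches stdin/stdout.
class MyArgumentParser
  # Turns raw argv into an options hash; raises on unknown flags.
  def parse(argv)
    opts = { verbose: false, name: nil }
    argv.each do |arg|
      case arg
      when "--verbose" then opts[:verbose] = true
      when /\A--name=(.+)\z/ then opts[:name] = Regexp.last_match(1)
      else raise ArgumentError, "unknown argument: #{arg}"
      end
    end
    opts
  end
end

class MyBusinessLogic
  # A pure function of the parsed options -- trivial to assert on.
  def greeting(opts)
    opts[:verbose] ? "Hello, #{opts[:name]}! (verbose mode)" : "Hello, #{opts[:name]}!"
  end
end

# The console shell is the only part that does I/O, and it is thin
# enough not to need an end-to-end test at all.
def run(argv, out = $stdout)
  opts = MyArgumentParser.new.parse(argv)
  out.puts MyBusinessLogic.new.greeting(opts)
end
```

With this shape, the "does it open a window / read a line" behaviour stays untested (Microsoft's problem), while everything you actually wrote is covered by fast, focus-stealing-free tests.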

Looking for a decent scheme to implement acceptance tests environment using native Objective-C & Mac technologies

Background
I am looking for a way to implement a scheme similar to what the Frank library uses to implement "Automated acceptance tests for native iOS apps", but I want this scheme to rely on native iOS/Mac OS X technologies. Sorry for the long read, but it deserves a verbose explanation.
1. Here is a short overview of how Frank works:
it has a client part and a server part.
The server part is embedded into the application that we want to run acceptance tests against. The Frank tutorials show us how to create a duplicate of an app's main target and embed the Frank HTTP server into it.
The client part is mainly Cucumber, which runs plain-text scenarios: each scenario contains directives that should be run against the app (fill a text field, touch a button, ensure that a specific element exists on a page, etc.). Each scenario also launches its own instance of the app, thereby providing a fresh state every time we enter a new scenario.
The client (Cucumber and a Ruby-to-Objective-C bridge) communicates with the server (the HTTP server embedded into the app) via HTTP. It uses special conventions so the client can tell the server what the app should do to perform a specific scenario.
2. Recently I found the following article written by the author of Frank, Pete Hodgson:
http://blog.thepete.net/blog/2012/11/18/writing-ios-acceptance-tests-using-kiwi/
in which he suggests a simpler way of writing acceptance tests for developers who don't like to rely on external tools like Cucumber and Ruby. Let me quote the author himself:
Before I start, let me be clear that I personally wouldn’t use this approach to writing acceptance tests. I much prefer using a higher-level language like ruby to write these kinds of tests. The test code is way less work and way more expressive, assuming you’re comfortable in ruby. And that’s why I wanted to try this experiment. I’ve spoken to quite a few iOS developers over time who are not comfortable writing tests in ruby. They are more comfortable in Objective-C than anything else, and would like to write their tests in the same language they use for their production code. Fair enough.
This blog post inspired me to quickly roll my own very raw and primitive tool that does exactly what Pete described in his blog post: NativeAutomation.
Indeed, as Pete described, it is possible to run acceptance tests using just a Kiwi/PublicAutomation setup placed in a simple OCTests target. I really liked it because:
It is just pure C/Objective-C. It was very easy to build an initial bunch of C helpers that look like Capybara helpers:
tapButtonWithTitle, fillTextFieldWithPlaceholder, hasLabelWithTitle and so on...
It does not require external tools and languages: no need for Cucumber/Ruby or anything else. NativeAutomation itself uses just PublicAutomation, which Frank also uses. PublicAutomation is needed to simulate user interactions on the app's screen: touches, fills, gestures...
It is very handy to run these tests right from Xcode by just running a Cocoa Unit Tests bundle. (Though command-line builds are easy as well).
Problem
The problem with the Kiwi/PublicAutomation approach is that the whole testing suite is embedded into the app's bundle. This means that after each scenario is run, it is not possible to reset the application to force it into a fresh state before the next scenario begins. The only way to work around this problem is to write a Kiwi beforeEach hook with a method that performs a soft reset of the application, like:
+ (void)resetApplication {
    [Session invalidateSession];
    [LocationManager resetInstance];
    [((NavigationController *)[UIApplication sharedApplication].delegate.window.rootViewController) popToRootViewControllerAnimated:NO];
    [OHHTTPStubs removeAllStubs];
    cleanDatabase();
    shouldEventuallyBeTrue(^BOOL{
        return FirstScreen.isCurrentScreen;
    });
}
But in the case of an application that involves networking, asynchronous jobs, Core Data and file operations, it becomes hard to perform a real teardown of the state left by the previous scenario.
Question
The problem described above got me thinking about whether it is possible to implement a more complex approach, similar to Frank's, with a second app that lives apart from the main app's bundle and does not rely on external tools like Cucumber/Ruby.
Here is how I see the way it could be done.
Besides the main app (MainApp), there is a second iOS (or Mac OS X) app (AcceptanceTestsRunnerApp) that contains the whole acceptance test suite and runs this suite against the main app's bundle:
It fires up a new simulator instance before each new scenario and executes the current scenario against that simulator's instance of the app.
The question is:
I am not well versed in the Mac OS X or iOS technologies that would allow me to do that: I don't know if it is even possible to set up a Mac OS X / iOS application (AcceptanceTestsRunnerApp) that could take control over a main app (MainApp) and run acceptance test scenarios against it.
I will be thankful for any thoughts/suggestions from people who would feel more comfortable having a native Objective-C tool for writing acceptance tests for iOS applications.
UPDATED LATER
...I did read some documentation about XPC services, but the irony is that the scheme I am looking for is quite the opposite of the scheme the XPC documentation suggests:
Ideally I want my AcceptanceTestsRunnerApp to dominate over MainApp: to run it and control it (user interactions, assertions about the view hierarchy) via some object proxying to MainApp's application delegate, while an XPC services setup would assume the XPC service (AcceptanceTestsRunnerApp) to be subordinate to the app (MainApp) and would require the XPC service to live inside the app's bundle, which I want to avoid by all means.
...My current reading is Distributed Objects Programming Topics. It seems to me that I will find my answer there. If no one provides guides or directions, I will post an answer with my own research and thoughts.
...This is the next step in my quest: Distributed Objects on iOS.
KIF and Cedar, the iOS port of Jasmine, work fairly well in my experience.
However, I have also had a lot of good use out of Calabash, which, like Frank, is powered by Gherkin.

How to Catch up On Tests for Rails Site

I learned Rails to create a website and the basic version is up and running. Unfortunately, I only wrote a couple of tests for my code. What should I do now to get test coverage for my code? It will be difficult to go back now and write tests for all my previous code. Would it make sense to use a recording tool like Selenium to visit the sites and record tests? Is there a specific recording tool for Rails?
(In short, how can one catch up to get test coverage on code that doesn't have enough tests?)
First, quantitatively measure your test coverage with a tool like rcov.
You're right that integration tests will provide more coverage, so they are the right choice for your goal. However, they provide less meaningful feedback when they fail, so you'll want to augment or replace them with more isolated tests in the future.
I expect that recording tests will result in test source that is hard for humans to read. So you might want to consider capybara-webkit instead (but don't use Capybara 2.1 yet).
Finally, because your current test base is small, you have an opportunity to consider rspec or shoulda-context.
Truth is, you're just going to have to go back and write all the tests.
First off, no kind of "recording" tool will help with the unit and controller tests.
Second, if you did use something to "record" feature/integration tests, what exactly would you be testing? If you aren't specific about what functionality you expect to be there, how can you be sure your app is even doing what it's supposed to be doing?

Need to execute a script in the middle of a test case

I have a scenario: a RoR application with MySQL, and a workflow where
- the end user follows that workflow to register her software
- the software is local to the end user, running on her machine
- in the middle of this workflow, the Rails app makes an HTTP request to this software, and it responds
- this handshake between the Rails app and that software updates a couple of entries in the DB
Now I have to write a test case verifying that, after this workflow is done,
- the proper entries have been added to the DB
- the workflow executed successfully
- the handshake went well, so a complete cycle
I am looking for the best approach to take here.
For now we have not prepared, and are not planning, a comprehensive way of testing the entire app; we are just preparing a few important test cases, and this one is the first of its kind. So far we were doing it manually.
Now, being lazy, we want to automate this, and I am thinking of using Watir. I have a software simulator for the handshake; I could execute that simulator from Watir and get this whole cycle tested.
Does it sound reasonable for my Watir/Ruby script to be
- executing the simulator script
- checking the DB status
- executing the workflow
- stopping that script
- checking the DB status again
Obviously, all the Ruby/Rails units involved would have their own unit tests prepared separately, but I am interested in testing this whole cycle.
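The cycle above could be sketched as a plain Ruby driver in which the simulator, the browser and the DB connection are injected collaborators (all names here are hypothetical), so the orchestration itself can be exercised with fakes before wiring in the real Watir browser and MySQL connection:

```ruby
# Hypothetical driver for the full registration cycle. In the real
# suite, `simulator` would wrap the handshake simulator process,
# `browser` a Watir browser, and `db` a MySQL connection; in a unit
# test they can be simple fakes.
class RegistrationCycleTest
  def initialize(simulator:, browser:, db:)
    @simulator = simulator
    @browser = browser
    @db = db
    @log = []
  end

  attr_reader :log

  def run
    @simulator.start                    # 1. execute the simulator script
    @log << [:before, @db.entry_count]  # 2. check DB status
    @browser.execute_workflow           # 3. drive the registration workflow
    @simulator.stop                     # 4. stop the simulator
    @log << [:after, @db.entry_count]   # 5. check DB status again
    entries_added?
  end

  # The cycle passed if the workflow added the expected entries.
  def entries_added?
    before = @log.assoc(:before)[1]
    after  = @log.assoc(:after)[1]
    after > before
  end
end
```

The point of the injection is exactly the one raised in the question: the units keep their own tests, while this one object owns the end-to-end sequencing.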
Any better suggestions, comments?
It's important to have tests at the unit AND functional level, IMO, so I think your general approach is good.
Watir or Selenium/WebDriver would be good tools to use. If you don't already have an approach in mind, you should check out Cheezy's (Jeff Morgan's) page-object gem. It works with watir-webdriver and selenium-webdriver.
I like that you explicitly call out hitting the database to check for proper record creation. It's really important to hit the DB as a test oracle to ensure your system's working properly.
Don't want to start a philosophical debate, but I'll say that going down the road you're considering has been a time killer for me in the past. I'd strongly recommend spending the time refactoring your code into a structure that can be unit tested. One of the truly nice side effects of focusing on unit tests is that you end up creating a code base that follows the Single Responsibility Principle, whether you realize it or not.
Also, consider skimming this debate about the fallacies of higher-level testing frameworks
Anyway, good luck, friend.

What's the best strategy for adding tests to an existing rails project?

There is an existing project that is already deployed in production. We want to add some tests to it (the sooner the better), and I have to choose between going the BDD way (RSpec/Cucumber) or the TDD way (Test::Unit). I am only just starting with BDD, and I am wondering what the best decision would be. I am afraid that using RSpec/Cucumber on an existing Rails project (which was deployed this week and requires really fast iterations) will be quite hard, especially since that is not how it is meant to be used; we are supposed to write stories/features first and iterate from there.
Test::Unit could be more reasonable, maybe.
Do you have any thoughts on that? An experience to share? Some advice?
I believe the easiest way to get coverage for an existing application is to use Cucumber. This will allow you to describe and document how the website/application should work (and should keep working).
Because it works from the outside in, this also has the advantage that you do not need to comprehend the inner workings completely yet. At the same time, you test all layers of the application (model, view, controller) in one test.
When you start actually changing code, I would then start adding unit tests for the code you are changing, using your favourite testing framework. I personally favour RSpec, but as you know this is a personal choice :)
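For example, a first Cucumber feature for an existing app can simply pin down behaviour that already works; every step, page and message below is made up for illustration:

```gherkin
Feature: Sign in
  Existing behaviour we want to document and keep working

  Scenario: Registered user signs in
    Given a user exists with email "user@example.com" and password "secret"
    When I go to the sign-in page
    And I fill in "Email" with "user@example.com"
    And I fill in "Password" with "secret"
    And I press "Sign in"
    Then I should see "Signed in successfully"
```

Because such a scenario only states what the user sees, it survives internal refactorings, which is exactly what you want from a safety net added after the fact.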

How do I fake input for form testing?

I'm building a test harness for my Delphi 2009 app. Testing the logic is fairly simple. Making sure the forms work properly is proving a bit more complicated. I'd like a way to simulate real user input, to open a form, make it think there's a user typing certain things and clicking in certain places, and make sure it reacts correctly. I'm sure there's a way to do this, I just don't know what it is. Does anyone know how to do it?
DUnit has GUITesting.pas, which extends testing so you can send clicks, keystrokes and text to controls on a form, but that is about it.
Last year there were mentions of a Zombie GUI testing framework that was used internally by CodeGear developers, but nothing since Steve left for Falafel.
TestComplete is a good choice. Another commercial option for GUI testing is SmarteScript.
Well, for .NET there's NUnitForms for GUI testing a Windows application.
I don't know of any open-source option for Delphi, though.
TestComplete can test Delphi forms, but it's not free.
There are two parts to this: firstly, how do you automate the GUI, and secondly, how do you then 'test' whether it's working or not?
Firstly: to automate the GUI on Windows, try using AutoIt. It's a free tool for controlling Windows interfaces, sending keyboard input events, etc. http://www.autoitscript.com/autoit3/
Secondly: testing is a big field, and I won't try to give you a whirlwind tour. But the mechanics of driving the GUI and testing the results could be handled using AutoIt's built-in BASIC-like language, or by using it in conjunction with a language like Ruby and Test::Unit (Ruby's built-in unit testing framework).
If there is nothing Delphi-specific out there and you need a quick solution, try an easy-to-learn scripting solution like AutoIt.
For a bit more sophisticated scripting, you might have a look at Scripted GUI Testing with Ruby.
But be aware that you should not test too much functionality via the GUI, because such tests are very likely to break. If you end up with too much GUI testing, you may need to rethink the design: decouple the logic from the GUI and test the logic directly with some xUnit framework.
Also have a look at a similar question about Windows Forms test automation.
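A tiny sketch of that decoupling, in Ruby since the suggestions above already lean on it (all class and message names are illustrative): the decisions live in a plain object that any xUnit framework can hit directly, and the form layer has almost nothing left to test.

```ruby
# Illustrative "humble view" split: validation logic is a plain object,
# testable without opening a single window.
class RegistrationValidator
  # Returns a list of error messages; an empty list means the form may submit.
  def errors_for(name:, email:)
    errors = []
    errors << "name is required" if name.to_s.strip.empty?
    errors << "email looks invalid" unless email.to_s.include?("@")
    errors
  end
end

# The form (a Delphi TForm, a web page, anything) only forwards field
# values and displays whatever comes back.
class RegistrationForm
  def initialize(validator)
    @validator = validator
  end

  def submit(fields)
    errs = @validator.errors_for(**fields)
    errs.empty? ? "submitted" : errs.join("; ")
  end
end
```

With this shape, the brittle GUI-driving tests can shrink to a handful of smoke tests, while the fast unit tests carry the real coverage.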
It seems like DUnit has some gui-testing functionality: delphiextreme.com
Not exactly an answer to your question, but there is a very good page (IMHO of course) about GUI Architectures by Martin Fowler, featuring the "Humble View" architecture as the last entry, which is geared specifically towards test-driven software development. Worth looking into.
This will of course not help you with the task of testing whether all controls are wired correctly and handle all necessary events, but it should help to minimize the amount of GUI code that does need testing.
OpenCTF might be good for you.
Quote:
OpenCTF is a test framework add-on for Embarcadero Delphi® which
performs automatic checks of all components in Forms (or DataModules).
It provides an easy way to build automatic quality checks for large
projects where many components have to pass repeated tests.
Adding OpenCTF tests to a DUnit test suite requires only a few lines of code.
Writing your own custom component tests needs only a few seconds.
OpenCTF is based on the DUnit open source test framework and extends
it by specialized test classes and helper functions.
Please head here to download.
