I'm working through the FrontStore series tutorial on TDD in MVC (Part 3, by Rob Conery/ASP.NET). The test I'm concerned with is CatalogRepository_Each_Category_Contains_5_Products(). Everything was working fine until I got to that test. I've gone through every line that makes up this test (including the test itself, the TestCatalogRepository, ...) and compared my code to Rob's, but the test keeps failing.
I also checked the source code from CodePlex; that test was not there.
Now, I wonder: can I put a breakpoint somewhere to check the local values as the test is being executed? If not, is there something similar?
Thanks for helping.
Debugging tests should be exactly the same as debugging your code: put a breakpoint and run the test in debug mode (in MSTest, Ctrl+R, Ctrl+T).
Depending on your test runner (NUnit or VS), you either start the test in debug mode (VS) or start the test runner and attach to the test runner's process (NUnit).
Another approach would be to create unit tests that act as breakpoints, though it would require refactoring the SUT.
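To illustrate the break-and-inspect approach, here is a minimal sketch (assuming MSTest; the test data is made up purely so there are locals to inspect):

using System.Diagnostics;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class BreakpointExampleTests
{
    [TestMethod]
    public void Inspect_Locals_Mid_Test()
    {
        var categories = new[] { "Books", "Games" }; // stand-in data to inspect

        // If no debugger is attached (e.g. the test was started from an
        // external runner), prompt to attach one, then halt execution so
        // the locals above can be examined in the IDE.
        if (!Debugger.IsAttached)
            Debugger.Launch();
        Debugger.Break();

        Assert.AreEqual(2, categories.Length);
    }
}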
How do I automate a Xamarin.iOS unit test project?
For Android, I found this link, which worked fine:
https://developer.xamarin.com/guides/android/troubleshooting/questions/automate-android-nunit-test/
Are there any references like this for Xamarin.iOS too?
At the time of writing, I don't believe what you'd like to do is possible. If I take your meaning, you'd probably like to hit "Run All" in some test runner (presumably in XS or VS) and then get the results immediately, but that's not how it works with Xamarin.iOS. I'm sure you've already glanced at Xamarin's iOS testing quick-start, but if not, here it is.
You have to set up a unit test app that uses the Touch.Unit framework, fire up the test app, and "touch" the tests you'd like to run. My experience doing this has not been great. The runner itself seems buggy, and you're limited in which other tools you can use (e.g., mocking frameworks won't work, and assertions made with Shouldly won't register). I guess it's better than nothing, though!
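For reference, the test app follows roughly the shape below, a sketch based on the classic MonoTouch-era Touch.Unit template (namespaces and details may differ by Xamarin version); the tests themselves are plain NUnit-style fixtures:

using MonoTouch.Foundation;
using MonoTouch.NUnit.UI;
using MonoTouch.UIKit;
using NUnit.Framework;

[Register("AppDelegate")]
public class AppDelegate : UIApplicationDelegate
{
    UIWindow window;
    TouchRunner runner;

    public override bool FinishedLaunching(UIApplication app, NSDictionary options)
    {
        window = new UIWindow(UIScreen.MainScreen.Bounds);
        runner = new TouchRunner(window);
        // Register every [TestFixture] in this assembly with the runner UI.
        runner.Add(System.Reflection.Assembly.GetExecutingAssembly());
        window.RootViewController = new UINavigationController(runner.GetViewController());
        window.MakeKeyAndVisible();
        return true;
    }
}

[TestFixture]
public class SampleTests
{
    [Test]
    public void Addition_Works()
    {
        Assert.AreEqual(4, 2 + 2);
    }
}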
I'm trying to use SpecFlow for a .NET project, and I'm new to it. The development team are using NUnit, so SpecFlow (which brings Cucumber-style testing to .NET) would seem to be a good option. However, the development team have come back saying that SpecFlow cannot be used because they do not have an API/service available at the level required. Currently all of their automated tests run through the UI using TestComplete, and I am keen to move to API-level testing.
Can anyone explain to me why SpecFlow cannot be used? I'm sorry if it's a newbie question, but no one I've asked can answer it, and I've asked everyone I can think of. Surely the first step would be to see whether we can use SpecFlow with NUnit, but perhaps not.
Can anyone give me a guide on my next steps? How can I be sure this isn't an option, without simply writing it off and without the concern that it's just being blocked?
Thank you
SpecFlow has a generator that produces unit test code for a variety of unit test frameworks; in its default configuration it generates NUnit tests. The getting-started page on specflow.org explains a quick way to get up and running with SpecFlow and NUnit: http://www.specflow.org/getting-started/.
If the UI is HTTP-based, SpecFlow can be used with WebDriver or another browser automation framework to test the UI. This blog post provides an overview of how to get started with SpecFlow, NUnit and WebDriver: http://blogs.lessthandot.com/index.php/enterprisedev/application-lifecycle-management/using-specflow-to/
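As a rough illustration, a WebDriver-backed step definition might look like the sketch below (the element name "q" and the step wording are hypothetical; adjust them to the page under test):

using NUnit.Framework;
using OpenQA.Selenium;
using OpenQA.Selenium.Firefox;
using TechTalk.SpecFlow;

[Binding]
public class SearchSteps
{
    private IWebDriver driver;

    [BeforeScenario]
    public void StartBrowser()
    {
        driver = new FirefoxDriver();
    }

    [When(@"I search for ""(.*)""")]
    public void WhenISearchFor(string term)
    {
        // "q" is a hypothetical input name on the page under test.
        driver.FindElement(By.Name("q")).SendKeys(term + Keys.Enter);
    }

    [Then(@"the results should mention ""(.*)""")]
    public void ThenTheResultsShouldMention(string expected)
    {
        Assert.IsTrue(driver.PageSource.Contains(expected));
    }

    [AfterScenario]
    public void StopBrowser()
    {
        driver.Quit();
    }
}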
I am unclear on the API you want to test. If you could provide more information on the specific API and UI you are trying to test, I could possibly provide some code examples or references:
Is the API exposed through HTTP?
Is the UI a web, mobile, or desktop application?
Have you tried to use SpecFlow at all?
SpecFlow doesn't run tests. It simply maps readable language to tests. If their test can be written as an NUnit test, then SpecFlow is available for you to use. With no change at all, here is how it would look.
Scenario: Running 'TestName'
Then I execute the test 'TestName'
You would map that to
[Then(#"I execute the test '(.*)'")]
public void ExecuteSpecificTest(string testname)
{
// Using reflection, execute the method listed
}
Obviously you would want to do better than that. You want a Given, a When, and a Then, so that you clearly show the setup, the action, and the comparison of expected versus actual results, but it isn't strictly necessary. Best practices, however, are another discussion.
To sum it up: code is code, and SpecFlow simply maps plain language to code. You can use WatiN, WebDriver, or anything else to hook into the UI or an API; SpecFlow doesn't care. It simply executes the methods without knowing what's inside.
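For completeness, here is what a scenario with an explicit Given/When/Then split could look like; this is a minimal sketch with a made-up in-memory calculator, not a pattern from the original question:

Scenario: Adding two numbers
Given I have entered 50 into the calculator
And I have entered 70 into the calculator
When I press add
Then the result should be 120

using System.Collections.Generic;
using System.Linq;
using NUnit.Framework;
using TechTalk.SpecFlow;

[Binding]
public class CalculatorSteps
{
    // Hypothetical in-memory "calculator" state shared across steps.
    private readonly List<int> entries = new List<int>();
    private int result;

    [Given(@"I have entered (\d+) into the calculator")]
    public void GivenIHaveEntered(int number)
    {
        entries.Add(number);
    }

    [When(@"I press add")]
    public void WhenIPressAdd()
    {
        result = entries.Sum();
    }

    [Then(@"the result should be (\d+)")]
    public void ThenTheResultShouldBe(int expected)
    {
        Assert.AreEqual(expected, result);
    }
}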
I am writing integration tests for my iOS app using KIF with the latest Xcode 5. When I run a test, a suite of tests, or all of them, the tests pass with no errors according to the console log, but the test navigator either takes many minutes to show the green pass icon for simple tests like Login, or keeps the spinner running indefinitely. I frequently have to Force Quit Xcode in order to clear the test results. I see this both on the simulator and the device.
I have tried using [tester waitForTimeInterval:3.0]; at the end of each test to no avail.
I have not found any discussions or solutions in all my searches, so I'm hoping to get some answers on this one.
Thanks in advance.
Thanks to Scott Anderson of Walmart Labs for this tip.
The cause of the slow test resolution was NSLog(). I have my own macro version of it that turns logging on when compiling for Debug, which is the case for test builds. I log the output of all my server calls, which adds up, especially during the registration process. When I disabled that, my tests came back green right after finishing, with no more hanging spinner.
The test navigator must be parsing the console output for XCTest results, and doing so slowly. This is speculation on my part, but it would explain the slowness.
I would like to perform the following iPad/iPhone testing scenario automatically:
Tap Edit box A
Type text "abcd"
Verify that button B is highlighted
I understand UIAutomation 4.0 allows you to write simple JavaScript to perform the above steps. However, UIAutomation does not come with a test infrastructure: for example, it lacks testing macros to show whether any tests failed, and it has no clear way to run setup and teardown for each test case.
That is why I am looking back at Xcode unit testing. Logic tests won't work for me. How about application tests?
Basically, I am looking for something that can do GUI testing and at the same time has a test infrastructure. It would be even better if it could be integrated into a continuous build environment.
Check out Zucchini. It has just come out; I saw a demo at a recent YOW! conference. It's basically a BDD testing framework that uses CoffeeScript for scripting and runs against an actual device. It's also fully runnable from CI servers, which makes it perfect for agile teams.
I haven't run it myself yet, but it seems to be exactly what I'm looking for. And no, I don't work for PlayUp :-)
Is there any way to measure code coverage with DUnit? Or are there any free tools that accomplish that? What do you use for that? What code coverage do you usually aim for?
Jim McKeeth: Thanks for the detailed answer. I am talking about unit testing in the sense of a TDD approach, not only about unit tests written after a failure occurred. I'm interested in the code coverage I can achieve with some basic pre-written unit tests.
I have just created a new open source project on Google Code with a basic code coverage tool for Delphi 2010. https://sourceforge.net/projects/delphicodecoverage/
Right now it can measure line coverage but I'm planning to add class and method coverage too.
It generates HTML reports with a summary, as well as marked-up source showing which lines are covered (green), which were not (red), and which lines had no code generated for them at all.
Update:
As of version 0.3 of Delphi Code Coverage you can generate XML reports compatible with the Hudson EMMA plugin to display code coverage trends within Hudson.
Update:
Version 0.5 brings bug fixes, increased configurability, and cleaned-up reports.
Update:
Version 1.0 brings support for EMMA output, coverage of classes and methods, and coverage of DLLs and BPLs.
I don't know of any free tools. AQTime is pretty much the de facto standard for profiling Delphi. I haven't used it, but a quick search turned up Discover for Delphi, which is now open source but only does code coverage.
Either of these tools should give you an idea of how much code coverage your unit tests are getting.
Are you referring to code coverage from unit tests, or to stale code? Generally I think only testable code that has had a failure should be covered by a unit test (yes, I realize that may be starting a holy war, but that is where I stand). So that would be a pretty low percentage.
Stale code, on the other hand, is a different story. Stale code is code that doesn't get used. For a lot of your code you most likely don't need a tool to tell you this; just look for the little blue dots after you compile in Delphi. Anything without a blue dot is stale. Generally, if code is not being used, it should be removed. So that would be 100% code coverage.
There are other scenarios for stale code, like special code to handle the date ever landing on the 31st of February. The compiler doesn't know that can't happen, so it compiles the code in and gives it a blue dot. Now, you can write a unit test for that code, and it might even pass, but then you have just wasted your time a second time (first writing the code, then testing it).
There are tools to track which code paths get used when the program runs, but that is only semi-reliable, since not all code paths get used on every run. Take the special code you have for handling leap years: it only runs every four years. So if you take it out, your program will be broken every four years.
I guess I didn't really answer your question about DUnit and code coverage, but I may have left you with more questions than you started with. What kind of code coverage are you looking for?
UPDATE: If you are taking a TDD approach, then no code is written until you write a test for it, so by nature you have 100% test coverage. Of course, just because each method is exercised by a test does not mean that its entire range of behaviors is exercised. SmartInspect provides a really easy way to measure which methods are called, along with timing, etc. It is a little less than AQTime, but not free. With some more work on your part, you can add instrumentation to measure every code path (branches of "if" statements, etc.). Of course, you can also just add your own logging to your methods to achieve a coverage report, and that is free (well, except for your time, which is probably worth more than the tools). If you use JEDI Debug, you can get a call stack too.
TDD really cannot easily be applied retroactively to existing code without a lot of refactoring, although the newer Delphi IDEs can generate unit test stubs for each public method, which then gives you 100% coverage of your public methods. What you put in those stubs determines how effective that coverage is.
I use Discover for Delphi, and it does the job, for unit testing with DUnit and functional testing with TestComplete.
Discover can be configured to run from the command line for automation.
As in:
Discover.exe Project.dpr -s -c -m
Discover works great for me. It hardly slows down your application, unlike AQTime. This may not be a problem for you anyway, of course. I think the recent versions of AQTime perform better in this respect.
I've been using Discover for years; it worked excellently up to and including BDS 2006 (which was the last pre-XE* version of Delphi I used and still use), but in its current open-sourced state it's unclear how to make it work with the XE* versions of Delphi. A shame really, because I loved this tool: fast and convenient in almost every way.
So now I'm moving to delphi-code-coverage...