I'd prefer that Dart not log the current test description every second while a test is executing. If a test takes 5 seconds, Dart logs its description 5 times.
I would prefer the log to show each test description only once, when the test runs.
This is strange default behavior compared to other test suites, but I can't find anyone discussing the problem or offering solutions.
Apologies for answering my own question. Reporters (not logging) is the key search term.
There are four reporters available in the default Dart test package. You set the reporter by adding the --reporter <reporter_name> flag on the test command line.
https://github.com/dart-lang/test#selecting-a-test-reporter
The default is "compact" which logs continuously. The one I want to use is called "expanded".
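If you always want the expanded reporter, you can also set it in a `dart_test.yaml` file at the package root (next to `pubspec.yaml`) instead of passing the flag every time. A minimal config:

```yaml
# dart_test.yaml -- read automatically by `dart test`
reporter: expanded
```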
There are various ways to enforce a timeout in MSTest and other frameworks.
But I did not find a similar way in behave to make a particular step fail if it exceeds a certain time limit.
What I tried:
Environmental controls/hooks: https://behave.readthedocs.io/en/stable/tutorial.html#environmental-controls
These give us hooks before and after the execution of steps/scenarios/features, but nothing to track time during step execution.
Fixtures: https://behave.readthedocs.io/en/stable/fixtures.html#
I faced the same problem with fixtures: I couldn't use a fixture during the execution of a step.
Please note: a timeout inside the step implementation function is not what I'm after. The expectation is to track each and every step's execution and make the step fail if it takes (let's say) more than 15 minutes to complete.
When running our test suite we perform a re-run, which gives us 2 HTML reports at the end. What I am looking to do is produce one final report that I can share with stakeholders etc.
Can I merge the 2 reports so that if a test failed in the first run but passed in the second run, the merged report shows the test as passed?
I basically want to merge the reports to show a final outcome of the test run. Thanks
By only showing the report that passed you'd be throwing away a valuable piece of information: that there is an issue making the test suite flaky during execution. It could be something to do with the architecture or design of a particular test, or with the wait/sleep periods for some elements. Or, in some cases, the application under test has some sort of issue that often goes unchecked.
You should treat a failing report with as much respect as a passing one. I'd share both reports with the stakeholders, along with a short analysis of why the tests fail in the first one(s), or why they usually fail, and a proposal/strategy for fixing the failures.
Regarding merging the reports, it can be done. You could write a script that takes both reports, extracts the body of each, and, element by element, copies the passing result if the other run failed, or a failing result if both runs failed. But that looks like an effort to hide a possible problem rather than fix it from the ground up.
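As a sketch of that merge logic (assuming you have already parsed each report into a `{test_name: status}` map; the parsing itself depends entirely on your report format, and `merge_results` is a name I'm introducing):

```python
def merge_results(first_run, rerun):
    """Merge two {test_name: "passed"/"failed"} maps from consecutive runs.

    A test counts as passed if it passed in either run; tests that were
    not re-run keep their first-run status.
    """
    merged = dict(first_run)
    for name, status in rerun.items():
        if merged.get(name) != "passed":
            merged[name] = status  # re-run result wins unless already passed
    return merged
```

For example, `merge_results({"a": "failed", "b": "passed"}, {"a": "passed"})` yields a map with both tests passed, which is exactly the "final outcome" view the question asks for.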
Edit:
There is at least one lib that can help you achieve this: ReportBuilder, or the Java equivalent, ReportBuilderJava.
I was wondering how to create a golden master approach to start creating some tests for my MVC 4 application.
"Gold master testing refers to capturing the result of a process, and then comparing future runs against the saved “gold master” (or known good) version to discover unexpected changes." - @brynary
It's a large application with no tests, and it would be good to start development with a golden master to ensure that the changes we are making to increase test coverage (and hopefully decrease complexity in the long run) don't break the application.
I am thinking about capturing a day's worth of real-world traffic from the IIS logs and using that to create the golden master, but I am not sure of the easiest or best way to go about it. There is nothing out of the ordinary in the app: lots of controllers with postbacks, etc.
I am looking for a way to create a suitable golden master for a MVC 4 application hosted in IIS 7.5.
NOTES
To clarify something in regard to the comments: the "golden master" is a test you can run to verify the output of the application. It is like journalling your application and being able to replay that journal every time you make a change, to ensure you haven't broken anything.
When working with legacy code, it is almost impossible to understand it and to write code that will surely exercise all the logical paths through the code. For that kind of testing, we would need to understand the code, but we do not yet. So we need to take another approach.
Instead of trying to figure out what to test, we can test everything, a lot of times, so that we end up with a huge amount of output, about which we can almost certainly assume that it was produced by exercising all parts of our legacy code. It is recommended to run the code at least 10,000 (ten thousand) times. We will write a test to run it twice as much and save the output.
Patkos Csaba - http://code.tutsplus.com/tutorials/refactoring-legacy-code-part-1-the-golden-master--cms-20331
My question is how do I go about doing this to a MVC application.
Regards
Basically you want to compare two large sets of results and control the variations: in practice, an integration test. I believe that real traffic can't give you the control that you need.
Before making any change to the production code, you should do the following:
Create X random inputs, always using the same random seed, so you can regenerate the same set over and over again. You will probably want a few thousand random inputs.
Bombard the class or system under test with these random inputs.
Capture the outputs for each individual random input.
When you run it for the first time, record the outputs in a file (or database, etc). From then on, you can start changing your code, run the test and compare the execution output with the original output data you recorded. If they match, keep refactoring, otherwise, revert back your change and you should be back to green.
Your IIS-log approach doesn't fit this scheme. Imagine a scenario in which a user purchases a certain product: you cannot determine the outcome of the transaction (insufficient credit, non-availability of the product), so you cannot trust the recorded input. What you would need is a way to replicate that data automatically, and browser automation in this case cannot help you much.
You can try a different approach: something like the Lightweight Test Automation Framework, or else MvcIntegrationTestFramework, which are probably the most appropriate for your scenario.
I'm new to iOS, Xcode, the KIF framework, and Objective-C, and my first assignment is to write test code using KIF. It sure seems like it would be a lot easier if KIF had conditional statements.
Basically something like:
if ([tester existsViewWithAccessibilityLabel:@"Login"]) {
    [self login];
}
// continue with test in a known state
When you run one test at a time, KIF exits the app after the test. If you run all your tests at once, it does not exit between tests, requiring testers to be very, very careful about the state of the application (which is very time-consuming and not fun).
Testing frameworks typically don't implement if conditions because they already exist in the host language.
You can look at the testing framework's source code to find out how it does its "if state" checks. This will teach you how to fish for most things you may want to do (even if it is not always a good idea to do them during a functional test). You could also look here: Can I check if a view exists on the screen with KIF?
Besides, your tests should be assertive in nature and follow this workflow:
given:
the user has X state setup
(here you write code to assertively setup the state)
It is OK and preferred to isolate your tests and setup
the "given" state (e.g. set login name in the model directly without going
through the UI) as long as you have covered that behavior in another test.
When:
The user tries to do X
(here you tap something etc..)
Then:
The system should respond with Z
(here you verify that the system did what you need)
The first step in every test should be to reset the app to a known state, because it's the only way to guarantee repeatable testing. Once you start putting conditional code in the tests themselves, you are introducing unpredictability into the results.
You can always try the method tryFindingViewWithAccessibilityLabel:error:, which returns YES if it can find the view and NO otherwise.
if ([tester tryFindingViewWithAccessibilityLabel:@"login" error:nil]) {
    // Test things
}
I have many Cucumber feature files, each consisting of many scenarios.
When run together, some of them fail.
When I run each test file on its own, they all pass.
I think my database is not correctly cleaned after each scenario.
What is the correct process to determine what is causing this behavior?
By the sound of it your tests are depending upon one another. You should be trying to get each individual test to do whatever setup is required for that individual test to run.
The setup parts should be done during the "Given" part of your features.
Personally, to stop the features from becoming verbose and to keep them close to the business language that they were written in, I sometimes add additional steps that do the setup and call them from the steps that are in the feature file, if that makes sense to you.
This happens to me at different times and for different reasons.
Sometimes it's that a stub or mock invoked in one scenario screws up another, but only when they are both run (each is fine alone).
The only way I've been able to solve these is by debugging while running enough tests to get a failure. You can drop the debugger line in step_definitions, or call it as a step itself (When I call the debugger) matched to a step definition that just says 'debugger' as the Ruby code.