ruby cucumber testing practices - ruby-on-rails

I have many Cucumber feature files, each consisting of many scenarios.
When run together, some of them fail.
When I run each test file on its own, they all pass.
I suspect my database is not being cleaned correctly after each scenario.
What is the correct process to determine what is causing this behavior?

By the sound of it, your tests are depending on one another. You should aim for each individual test to do whatever setup it needs in order to run on its own.
The setup should be done in the "Given" part of your features.
Personally, to stop the features from becoming verbose and to keep them close to the business language they were written in, I sometimes add additional steps that do the setup and call them from the steps in the feature file.
Hopefully that makes sense.
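A minimal sketch of that step-extraction idea (the step names are hypothetical, and tiny stand-ins for Cucumber's `Given`/`step` DSL are defined inline so the sketch runs as plain Ruby; in a real project these methods come from the cucumber gem and the file lives under features/step_definitions/):

```ruby
# Tiny stand-ins for Cucumber's DSL so this sketch runs as plain Ruby;
# in a real suite these methods are provided by the cucumber gem.
STEPS = {}
def Given(pattern, &block)
  STEPS[pattern] = block
end

def step(text)
  _pattern, block = STEPS.find { |pat, _| pat.match?(text) }
  raise "undefined step: #{text}" unless block
  block.call
end

$world = {}

# The business-facing step stays close to the feature language...
Given(/^a logged-in customer with an empty cart$/) do
  step 'a customer account exists'
  step 'I am logged in as that customer'
end

# ...while the verbose setup lives in lower-level helper steps.
Given(/^a customer account exists$/) do
  $world[:account] = 'customer@example.com'
end

Given(/^I am logged in as that customer$/) do
  $world[:logged_in] = true
end

step 'a logged-in customer with an empty cart'
```

In a real suite the feature file would then just say `Given a logged-in customer with an empty cart`, keeping the business language intact while the setup stays self-contained per scenario.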

This happens to me for different reasons at different times.
Sometimes it's a stub or mock invoked in one scenario that breaks another, but only when both are run (each is fine alone).
The only way I've been able to solve these is by debugging while running just enough tests to reproduce the failure. You can drop a debugger line in step_definitions, or call it as a step itself ("When I call the debugger") matched to a step definition whose only Ruby code is 'debugger'.
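Before reaching for the debugger, it can help to script the search for the offending combination. This is a rough sketch, not part of either answer: `runner` is a hypothetical stand-in for shelling out to `cucumber <files>` and reporting pass/fail; the helper walks the files that run before the failing one and finds the first that makes it fail when the two run together:

```ruby
# Given the suite's file order and a known "victim" file that fails only
# in the full run, find the earlier file that pollutes its state.
# `run` is a hypothetical callable standing in for `cucumber <files>`;
# it returns true when the given set of files passes.
def polluting_file(files, victim, &run)
  files.take_while { |f| f != victim }
       .find { |candidate| !run.call([candidate, victim]) }
end

# Simulated suite: "signup.feature" leaves a record behind that breaks
# "reports.feature" whenever the two share a database.
runner = lambda do |files|
  leaked = files.include?('signup.feature')
  !(leaked && files.include?('reports.feature'))
end

files = %w[login.feature signup.feature search.feature reports.feature]
culprit = polluting_file(files, 'reports.feature', &runner)
# culprit is the first earlier file that makes the victim fail when the
# two run together.
```

Once you have the minimal failing pair, a debugger session over just those two files is much faster than over the whole suite.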

Related

What are the best practices for structuring an XCTest UI Test Suite?

I am setting up a test suite for an iOS app. I am using Xcode's XCTest framework. This is just a UI test suite. Right now I have a test target with one file TestAppUITests. All of my test cases reside in this file (first question: should all of my test cases be in here or do I need more files?)
Within this file, I have a bunch of test cases that behave as if a user were using the app: log in (or create an account, then log in) -> navigate around the app -> check that UI elements loaded -> add additional security such as a secondary email address -> log out.
How should these be ordered?
I've researched all around, found some gems here and there but still have questions. Within the test target, should you have multiple files? I just have the one within my UITest target.
My other big question is a bit harder to explain. Each time we run the test suite from the beginning, the app starts in a state where you are not logged in. So, for example, in order to test something like navigating WITHIN the app, I need the login test to run first. Right now I have it set up so that the login test runs once, then all the other tests after it, ending with logout. But that one file, TestAppUITests, is getting very long with tons of test cases. Is this best practice?
So, let's divide this into parts:
1/ Should all of my test cases be in here or do I need more files?
Well, your tests are the same as any other app code you have. Do you keep all of your app code in one file? Probably not, so it is good practice to divide your tests into more classes as well (I divide them by what they test: a LoginTests class, a UserProfileTests class, and so on).
To go further with this, I also keep test classes and shared methods in separate files. For example, I have a method that performs a login in a UI test, so it lives in a UITestCase+Login extension (UITestCase is a class extended by all of these UITestCase+Something extensions), and the test that exercises login lives in LoginTests, which calls the login method from the UITestCase+Login extension.
However, you don't necessarily need more test classes; if you decide to keep all of your UI tests in one class, that's your choice. Everything will still work, but keeping the tests and the methods they use in separate files is simply good practice for the future development of your tests.
2/ ... Add additional security like secondary email address... How should these be ordered?
Wrap them in methods and call those methods from your tests.
This is my method for expecting a UI message when I use invalid login credentials:
func expectInvalidCredentialsErrorMessageAfterLoggingWithInvalidBIDCredentials() {
    let alertText = app.alerts.staticTexts[localizedString("mobile.account.invalid_auth_credentials")]
    let okButton = app.alerts.buttons[localizedString("common.ok")]
    wait(
        until: alertText.exists && okButton.existsAndHittable,
        timeout: 15,
        or: .fail(message: "Alert text/button are not visible.")
    )
    okButton.tap()
}
And this is how I use it in a test:
func testTryToLoginWithMissingBIDAndExpectError() {
    let inputs = BIDLoginInputs(
        BID: "",
        email: "someemail@someemail.com",
        departureIATA: "xxx",
        dayOfDeparture: "xx",
        monthOfDeparture: "xxx",
        yearOfDeparture: "xxx"
    )
    loginWithBIDFromProfile(inputs: inputs)
    expectInvalidCredentialsErrorMessageAfterLoggingWithInvalidBIDCredentials()
}
You can see that the tests are very readable and that they consist almost entirely of methods that are reusable across multiple tests.
3/ Within the test target, should you have multiple files?
Again, this is up to you; however, having everything in one file is not good for maintenance or for the future development of these tests.
4/ ... Each time we run the test suite from the beginning the app starts in a state where you are not logged in...Right now I have it setup so that the login test runs once, then all other tests after it, then ends with logout...
Not a good approach (in my humble opinion, of course). Put the functionality into methods (yes, I repeat myself here :-) ) and divide the test cases into more files, ideally by the nature of their functionality, by "what they do", with each test doing its own setup rather than depending on an earlier test.
Hope this helped; I struggled a lot with the same questions when I was starting with iOS UI tests too.
Oh, and by the way: my article on medium.com about advanced tactics and approaches to iOS UI tests with XCTest is coming out in a few days. I can add a link once it is out; that should help you even further.
Echoing this answer: it is against best practice to store all of the tests for a complex application in a single file, and your tests should be structured so that they are independent of each other, with each test exercising a single behaviour.
It may seem counter-intuitive to split everything up into many tests that each re-launch your app at the beginning, but this makes your test suite more reliable and easier to debug, since smaller, shorter tests minimise the number of unknowns in each one. Since UI tests take a relatively long time to run, which can be frustrating, you should try to minimise the number you need by ensuring your app has good unit/integration test coverage.
With regards to best practices on structuring your XCTest UI tests, you should look into the Screenplay or Page Object models. The Page Object model has been around a long time, and there are many posts around it although many tend to be focused on Selenium or Java-based test frameworks. I have written posts on both the Page Object model and the Screenplay model using Swift and XCTest, they should help you.

Merging results from different Cucumber HTML Reports

When running our test suite we perform a re-run, which gives us two HTML reports at the end. What I would like is one final report that I can share with stakeholders.
Can I merge the two reports so that if a test failed in the first run but passed in the second, the report shows the test as passed?
I basically want to merge the reports to show the final outcome of the test run. Thanks.
By showing only the passing report you'd be throwing away a valuable piece of information: that there is an issue making the test suite flaky during execution. It could be something in the architecture or design of a particular test, the wait/sleep periods for some elements, or, in some cases, an issue in the application under test that often goes unchecked.
You should treat a failing report with as much respect as a passing one. I'd share both reports with the stakeholders, together with a short analysis of why the tests fail in the first one(s), or why they usually fail, and a proposal/strategy for fixing the failures.
As for merging the reports, it can be done: a script could take both reports, extract the body of each and, element by element, copy the passing result when the other run failed (or a failing one if both runs failed). But that looks like an effort to hide a possible problem rather than fix it from the ground up.
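For illustration, here is a rough sketch of such a merge in Ruby. It assumes the runs were also saved with Cucumber's JSON formatter (much easier to manipulate than the HTML) and treats feature URI plus scenario name as a scenario's identity, which is an assumption; the toy data at the bottom stands in for two parsed report files:

```ruby
# Merge two parsed Cucumber JSON reports (first run + re-run), keeping
# the re-run's result for any scenario the re-run contains. Assumes the
# standard JSON formatter layout: an array of features, each with "uri"
# and an "elements" array of scenarios. In real use, `first` and `rerun`
# would come from JSON.parse(File.read(...)).
def merge_reports(first_run, rerun)
  rerun_index = {}
  rerun.each do |feature|
    feature['elements'].each do |scenario|
      rerun_index[[feature['uri'], scenario['name']]] = scenario
    end
  end

  first_run.map do |feature|
    merged = feature.dup
    merged['elements'] = feature['elements'].map do |scenario|
      rerun_index.fetch([feature['uri'], scenario['name']], scenario)
    end
    merged
  end
end

# Toy data standing in for the two generated reports.
first = [{ 'uri' => 'features/login.feature',
           'elements' => [{ 'name' => 'valid login',
                            'steps' => [{ 'result' => { 'status' => 'failed' } }] }] }]
rerun = [{ 'uri' => 'features/login.feature',
           'elements' => [{ 'name' => 'valid login',
                            'steps' => [{ 'result' => { 'status' => 'passed' } }] }] }]

merged = merge_reports(first, rerun)
```

The merged array can then be fed back through an HTML report generator. Note that this deliberately overwrites first-run failures, which is exactly the information-hiding trade-off discussed above.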
Edit:
There is at least one lib that can help you achieve this: ReportBuilder, or the Java equivalent, ReportBuilderJava.

How can I create a golden master for an MVC 4 application

I was wondering how to create a golden master approach to start creating some tests for my MVC 4 application.
"Gold master testing refers to capturing the result of a process, and then comparing future runs against the saved "gold master" (or known good) version to discover unexpected changes." - @brynary
It's a large application with no tests, and it would be good to start development with a golden master to ensure the changes we are making to increase test coverage (and hopefully decrease complexity in the long run) don't break the application.
I am thinking about capturing a day's worth of real-world traffic from the IIS logs and using that to create the golden master, but I am not sure of the easiest or best way to go about it. There is nothing out of the ordinary in the app: lots of controllers with postbacks, etc.
I am looking for a way to create a suitable golden master for an MVC 4 application hosted in IIS 7.5.
NOTES
To clarify something in regard to the comments: the "golden master" is a test you can run to verify the output of the application. It is like journalling your application and being able to replay that journal every time you make a change, to ensure you haven't broken anything.
When working with legacy code, it is almost impossible to understand it and to write code that will surely exercise all the logical paths through the code. For that kind of testing, we would need to understand the code, but we do not yet. So we need to take another approach.
Instead of trying to figure out what to test, we can test everything, a lot of times, so that we end up with a huge amount of output, about which we can almost certainly assume that it was produced by exercising all parts of our legacy code. It is recommended to run the code at least 10,000 (ten thousand) times. We will write a test to run it twice as much and save the output.
Patkos Csaba - http://code.tutsplus.com/tutorials/refactoring-legacy-code-part-1-the-golden-master--cms-20331
My question is how do I go about doing this to a MVC application.
Regards
Basically, you want to compare two large sets of results and control the variations; in practice, an integration test. I don't believe real traffic can give you the control you need.
Before making any change to the production code, you should do the following:
Create X random inputs, always using the same random seed, so you can regenerate exactly the same set over and over again. You will probably want a few thousand random inputs.
Bombard the class or system under test with these random inputs.
Capture the output for each individual random input.
When you run this for the first time, record the outputs in a file (or database, etc.). From then on, you can start changing your code, run the test, and compare the execution output with the original output you recorded. If they match, keep refactoring; otherwise, revert your change and you should be back to green.
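The record-then-compare loop above can be sketched in a few lines of Ruby (the `legacy_price` routine, the seed, and the file name are all placeholders for whatever legacy code and storage you actually use):

```ruby
# Golden-master harness sketch: generate inputs from a FIXED seed so the
# exact same set is reproduced on every run, feed them to the code under
# test, and compare the output against the recorded master.
# `legacy_price` is a placeholder for the legacy routine being pinned down.
def legacy_price(quantity, unit_price)
  total = quantity * unit_price
  total -= total * 0.1 if quantity > 100  # pretend legacy bulk-discount rule
  total.round(2)
end

SEED        = 42
MASTER_FILE = 'golden_master.txt'

rng = Random.new(SEED)  # same seed => same "random" inputs every run
output = 10_000.times.map {
  quantity   = rng.rand(1..500)
  unit_price = rng.rand(1.0..99.0).round(2)
  "#{quantity},#{unit_price} => #{legacy_price(quantity, unit_price)}"
}.join("\n")

if File.exist?(MASTER_FILE)
  # Later runs: any behavioural change shows up as a mismatch here.
  if File.read(MASTER_FILE) == output
    puts 'matches golden master - keep refactoring'
  else
    puts 'OUTPUT CHANGED - revert or investigate'
  end
else
  File.write(MASTER_FILE, output)  # first run records the master
  puts 'golden master recorded'
end
```

For an MVC application the "routine" would be a controller action invoked through an in-process test host, and the captured output would be the rendered responses rather than a computed number, but the seed/record/compare structure is the same.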
Real traffic doesn't fit this approach. Imagine a scenario in which a user purchases a certain product: you cannot determine the outcome of the transaction (insufficient credit, non-availability of the product), so you cannot trust your input.
Moreover, you would need a way to replicate that data automatically, and browser automation cannot help you much here.
You could try a different approach instead, such as the Lightweight Test Automation Framework or the MvcIntegrationTestFramework, which are better suited to your scenario.

Why no conditional statements in functional testing frameworks like KIF?

I'm new to iOS, Xcode, the KIF framework, and Objective-C, and my first assignment is to write test code using KIF. It sure seems like it would be a lot easier if KIF had conditional statements.
Basically something like:
if ([tester existsViewWithAccessibilityLabel:@"Login"]) {
    [self login];
}
// continue with test in a known state
When you run one test at a time, KIF exits the app after the test. If you run all your tests at once, it does not exit between tests, requiring testers to be very, very careful about the state of the application (which is time-consuming and not fun).
Testing frameworks typically don't implement if conditions because conditionals already exist in the host language.
You can look at the testing framework's source code to find out how it performs its "if state" checks. That will teach you to fish, showing how to do most things you may want to do (even if it is not always a good idea to do them during a functional test). You could also look here: Can I check if a view exists on the screen with KIF?
Besides, your tests should be assertive in nature and follow a workflow like this:
Given:
the user has X state set up
(here you write code to assertively set up that state)
It is OK, and preferred, to isolate your tests and set up the "given" state directly (e.g. set the login name in the model without going through the UI), as long as you have covered that behavior in another test.
When:
the user tries to do X
(here you tap something, etc.)
Then:
the system should respond with Z
(here you verify the system did what you expect)
The first step in every test should be to reset the app to a known state, because it's the only way to guarantee repeatable testing. Once you start putting conditional code in the tests themselves, you are introducing unpredictability into the results.
You can always try the method tryFindingViewWithAccessibilityLabel:error:, which returns true if it finds the view and false otherwise.
if ([tester tryFindingViewWithAccessibilityLabel:@"Login" error:nil]) {
    // Test things
}

Why don't people access the database in RSpec?

I often see code that uses mocks in RSpec, like this:
describe "GET show" do
it "should find and assign #question" do
question = Question.new
Question.should_receive(:find).with("123").and_return(question)
get :show, :id => 123
assigns[:question].should == question
end
end
But why don't they add a Question with id => 123 to the database, retrieve it with get, and destroy it afterwards? Is this a best practice? If I don't follow it, will something bad happen?
When you write a behavioral test (or a unit test), you're trying to test only a specific part of the code, not the entire stack.
To explain this better: you are expressing and testing that "function A should call function B with these parameters", so you are testing function A and not function B, for which you provide a mock.
This is important for a number of reasons:
You don't need a database installed on every machine that builds your code. This matters if your company uses build machines (and/or continuous integration) across hundreds of projects.
You get better test results, because if function B is broken, or the database is not working properly, you don't get a spurious test failure on function A.
Your tests run faster.
It's always a pain to get a clean DB before each test. What if a previous run of your tests was stopped, leaving a Question with that id in the database? You'd probably get a test failure because of the duplicate id, while in reality the function works properly.
You need proper configuration before running your tests. This is not an insurmountable problem, but it's much better if tests run "out of the box", without having to configure a database connection, a folder for temporary test files, an SMTP server for testing email, etc.
A test that actually exercises the entire stack is called an "end-to-end test" or an "integration test" (depending on what it covers). These are important as well; for example, a suite of tests run against a real database can be used to check whether a given application runs safely on a different DB from the one used during development, and to fix functions that contain offending SQL statements.
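The "test function A, mock function B" idea can be sketched in plain Ruby (the names are hypothetical, and the stubbing here is done by hand; in RSpec you would use should_receive/allow instead):

```ruby
# A stand-in model whose real `find` would need a database connection.
class Question
  def self.find(_id)
    raise 'no database configured on this machine'
  end
end

# "Function A" under test: look up a question the way a controller would.
def show(id)
  { question: Question.find(id.to_s) }
end

# Unit style: replace Question.find with a stub for this test, so `show`
# can be exercised without any database at all (and without stale test
# data left over from previous runs).
stub_question = Object.new
Question.define_singleton_method(:find) do |id|
  raise ArgumentError, "unexpected id #{id}" unless id == '123'
  stub_question
end

assigns = show(123)
puts assigns[:question].equal?(stub_question)
```

Calling `show` never touches a database, yet the test still verifies both the collaboration ("find was called with '123'") and the outcome (the found object is what gets assigned), which is exactly what the RSpec example above expresses.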
Actually, many people do, including me. Generally speaking, since tests are there to check behavior, inserting database entries can feel a bit unnatural to some people.
Question.new is often enough because the object still goes through the model's Rails methods (validations and so on), so many people tend to use it, also because it is faster.
But indeed, even once you start using factories, there will be times when you insert data into your test database as well. I personally don't see anything wrong with this.
Overall, in situations where the test suite is really large, it can be quite an advantage not to save database entries. But if speed is not your top concern, I would say you don't really have to worry about how the test looks, as long as it is well constructed and to the point.
By the way, you do not need to destroy test data; it is cleaned up automatically after each test ends (typically each example runs inside a transaction that is rolled back). So, unless you are testing the actual delete behavior, avoid doing that explicitly.
