I am looking for a solution to the following scenario.
I am writing cucumber-capybara tests for a Ruby on Rails application.
I have multiple cucumber feature files with several scenarios (say Scenarios B...Z) which depend on one particular scenario (say Scenario A). I want to run that scenario only once for all the scenarios that depend on it.
So if I run scenarios B...Z, I want the scenario they depend on (A) to run only once. I came across the Before hook in cucumber, but it runs once for every scenario.
I have one feature file containing a scenario that provisions a server (Scenario A).
I have multiple other feature files with scenarios (Scenarios B..Z) that run their tests assuming the server has already been provisioned by Scenario A.
So whenever someone runs the dependent scenarios (Scenarios B..Z), they should check whether the server has already been provisioned by some other scenario; if it has, they should not try to provision it again, as that would increase the number of servers.
You could write a feature with Scenario A, and then write a Given step that sets up the initial state that Scenario A is about. Then you can call that step before Scenarios B..Z.
So, assuming Scenario A is signing in, write a single feature for signing in, and then for Scenarios B to Z write a Given step that sets a user as signed in, and use it as:
Given user is signed in
...
where Given user is signed in is defined like this:
Given /^user is signed in$/ do
  # code to sign a user in
end
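For the server example in the question, you can make such a Given step idempotent by guarding the provisioning with a flag in a support file, so it runs at most once per test run. A minimal sketch, where provision_server! is a hypothetical helper (this guards a single test run; detecting a server provisioned by a previous run would need an external check):

# features/support/provisioning.rb
# Provision the server at most once per test run.
module ServerProvisioning
  def self.ensure_provisioned!
    @provisioned ||= begin
      provision_server!  # hypothetical helper that does the actual work
      true
    end
  end

  def self.provision_server!
    # code to provision the server
  end
end

Given /^a server is provisioned$/ do
  ServerProvisioning.ensure_provisioned!
end

Every dependent scenario can then start with Given a server is provisioned; only the first one to run pays the provisioning cost.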
I am on a development team where we have 2 separate mobile apps. One of the apps is for Users. The other app is for Admins of those users. My main objective is to execute a test case in the Admin app, and then run a test case in the Users app to verify it's working properly. How can I approach this?
For example, I want to run a test case in the Admin app that revokes some privilege. I then want to run a test case from the Users app to check to confirm that privilege was revoked.
Maybe this is not a good strategy at all, but it makes sense for my team because we have 2 apps that work together: if we perform some function in the Admin app, we want to see the expected result in the Users app.
My plan was to mark each test with a Category, for example, "Privilege"
On Jenkins:
Run "Privilege" Category on the Admin app where I revoke some privilege
Run "Privilege" Category on the User app where I confirm revoked privilege
This seems like an OK test strategy right now. But if I have 20 UITests, that means I'll have 20 different Jenkins projects in my dashboard, one for each UITest (per device, per platform). It seems that with 20 UITests I'll end up with over 100 Jenkins projects. That's not really ideal to me.
Has anybody else come up with a testing strategy where they needed to test 2 separate projects back and forth? I understand that this does not really fall under unit testing, and I may get some vague answers about unit testing in general, but I do believe mobile is a different animal in the UI testing world.
There are a couple of points in your question:
do some function in the Admin app -- we want to see the expected result in the Users app
If you need to test such integrations between the two apps, you can go with proper labels for the ones that are integration tests.
mark each test with a Category
In any case, you will need some way to organise your suites. A good way to do so is test annotations. I think lazy setup is applicable in your case: it will set up the desired state for all marked tests, when needed.
needed to test 2 separate projects back and forth
End-to-end tests are mandatory for the most business-critical features. My suggestion is to employ backdoor manipulation. Your other tests should already have covered the simpler cases (e.g. setting a privilege in the Admin app), so if you have already exercised this feature, there is no point in redundancy.
It seems that with 20 UITests I'll end up with over 100 Jenkins projects. That's not really ideal
You actually don't need a Jenkins project per suite; just configure the tests via CLI arguments and your harness will pick that up for you. What you need is a tag (or platform, or device) to be passed to the runner.
Generally, you do NOT want tests to depend on each other. Have a look at this example:
In the admin app, you set the privilege.
You open the user app.
The privilege should be set, but it's not.
You know that something went wrong, but you don't know whether it's the admin app that's not working or the user app.
Therefore, you should test them independently by mocking (=faking) the backend:
Open the admin app.
Set the privilege.
Ask the mocked backend: Did you receive a call from the admin app to set the privilege?
In an independent UI test for the user app, you do the following:
Set up a fake backend where the privilege is set
Open the user app
See whether the privilege is set.
By separating the tests for both apps, you will know which of the two does not work.
The software that I use at work to do such things is called WireMock, but there are others out there, too.
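WireMock is a Java tool; if your test harness happens to be in Ruby, the WebMock gem plays a similar role. A minimal sketch of the "ask the fake backend whether it received the call" assertion, where the endpoint and payload are assumptions made for illustration:

require 'webmock/rspec'

# Stub the backend's revoke endpoint (URL and body are assumed).
stub = stub_request(:post, "https://backend.example.com/privileges/revoke").
         with(body: hash_including("user_id" => "42")).
         to_return(status: 200)

# ... exercise the admin app's client code here ...

# Ask the fake backend: did the admin app call you?
expect(stub).to have_been_requested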
There is a certain flow within our application that makes separate calls with related data. These are in different controllers, but interact with the same user.
We're trying to build a test to confirm that the full flow works as expected. We've built individual tests for the constituent parts, but want a test for the full flow.
E.g., we have a user who checks in to work (checkin) and then builds a widget (widgetize). We have methods that will filter our users between those who have checked in and those who have widgetized (and checked in). We can build little objects with FactoryGirl to ensure that the filter works, but we want a test that will have a user check in, another user check in, and the second one widgetize, so that we can confirm that our filtering methods only capture the users we want them to capture.
My first thought was to build an rspec test that simply made a direct call to checkin from the widgetize spec, and then confirmed the filter methods. But I discovered that rspec does not allow cross-controller calls (or at least I could not figure out how to make it work; posts and gets to that controller were not working). Also, people told me this was very bad practice.
How should I go about testing this?
This article gives a good overview of how to use request specs for integration tests:
http://everydayrails.com/2012/04/24/testing-series-rspec-requests.html
Basically, you want to use a gem like Capybara so you can simulate user input, run your tests through the whole app, and check that everything behaves as you expect.
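A minimal sketch of such a request spec for the flow described above, where the paths and the User.widgetized scope are assumptions:

# spec/requests/checkin_flow_spec.rb
require 'spec_helper'

describe "check in and widgetize" do
  it "only captures users who have widgetized" do
    alice = FactoryGirl.create(:user)
    bob   = FactoryGirl.create(:user)

    post "/checkins", user_id: alice.id        # hits the checkin controller
    post "/checkins", user_id: bob.id
    post "/widgetizations", user_id: bob.id    # hits the widgetize controller

    User.widgetized.should == [bob]            # only bob passes the filter
  end
end

Because a request spec drives the full Rack stack, the two posts can hit different controllers, which is exactly what controller specs disallow.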
I have developed a JBehave story to test a workflow implemented in our system.
Let’s say this story is called customer_registration.story
That story is a starting point for some other, more complex workflows that our system supports.
Those more complex workflows are also covered by different stories.
Let’s say we have one of our more complex work flows covered by a customer_login.story
So the customer_login.story will look something like this:
Story: Customer Login
Narrative:
In order to access ABC application
As a registered customer
I want to login into the application
Scenario: Successful login into the application
GivenStories: customer_registration.story
Given I am at the login page
When I type a valid password
Then I am able to see the application main menu
All works perfectly and I am happy with that.
The first story above (customer registration) is something I need to run on different sets of data.
Let's say our system supports i18n and we need to check that the customer registration story runs OK for all supported languages; say we want to test that our customer registration works with both en-gb and zh-tw.
So I need to implement a multi_language_customer_registration.story that will look something like this:
Story: Multi language customer registration
Narrative:
In order to access ABC application
As a potential customer
I want to register for using the application
Scenario: Successful customer registration using different supported languages
GivenStories: customer_registration.story
Then some clean-up step so the customer registration story can run again
Examples:
|language|
|en-gb   |
|zh-tw   |
Any idea about how I could achieve this?
Note that something like the following is not an option, as I do need to run the clean-up step between runs:
GivenStories: customer_registration.story#{0},customer_registration.story#{1}
Moving the clean-up step inside the customer registration story is not an option either, as then the login story would stop working.
Thanks in advance.
P.S. As you might guess, in reality the stories we created are more complex and refactoring them is not an easy task, but I am happy to do it for a real gain.
First off, BDD is not the same as testing. I wouldn't use it for every single i18n scenario. Instead, isolate the bit which deals with i18n and unit test that, manually test a couple of languages and call it done. If you really need more thoroughness, then use it with a couple of languages, but don't do it with all of them; just enough examples to give you some safety.
Now for the bit with the customers. First of all, is logging in and registration really that interesting? Are you likely to change them once you've got them working? Is there anything special about logging in or registration that's particular to your business? If not, try to keep that stuff out of the scenarios - it'll be more of a pain to maintain than it's worth, and if it's never going to change you can just test it once, manually.
Scenarios which show what the user is logging in for are usually more enticing and interesting to the business (you are having conversations with the business, right?).
Otherwise, here are the three ways in which you can set up a context (Given):
By hacking the data (so accessing the database directly)
Through the UI (or controller if you're automating from that level)
By using existing data.
You can also look to see if data exists, and if it doesn't, set it up. So for instance, if your customer is registered and you don't want him to be registered, you can delete his registration as part of setting up the context (running the Given step); or if you need him to be registered and he isn't, you can go through the UI to register him.
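For example, in Cucumber-style Ruby (the same idea applies in JBehave step classes), the "check, then set up" Given might look like the sketch below; the Registration model and the page details are assumptions:

Given /^the customer is not registered$/ do
  # Hack the data directly: remove the registration if it exists.
  registration = Registration.find_by_name("customer")  # assumed model
  registration.destroy if registration
end

Given /^the customer is registered$/ do
  # Go through the UI only if the registration does not already exist.
  unless Registration.find_by_name("customer")
    visit "/register"
    fill_in "Name", with: "customer"
    click_button "Register"
  end
end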
Lastly, JBehave has an @AfterScenario annotation which you can use to denote a clean-up step for that scenario. Steps are also reusable: you can call the steps of one scenario from within another step in code, rather than using JBehave's GivenStories mechanism (this is more maintainable anyway, IMO), and this will allow you to avoid clearing the registration when you log in.
Hope one of these options works for you!
From a tactical standpoint, I would do this:
In your .story file (note that JBehave references Examples table parameters with angle brackets):
Given I set my language to <language>
When I type a valid password <pass>
Then I am able to see the application main menu
Examples:
|language|pass      |
|en-gb   |password1 |
|zh-tw   |kpassword2|
Then in your Java file:
@Given("I set my language to <language>")
// method goes here, taking an @Named("language") parameter

@When("I type a valid password <pass>")
// method goes here, taking an @Named("pass") parameter

@Then("I am able to see the application main menu")
// method goes here
Most unit testing frameworks support this.
Look at how in MSTest you can specify a DataSource; NUnit is similar:
https://github.com/leblancmeneses/RobustHaven.IntegrationTests
Unfortunately, some of the BDD frameworks I've seen try to replace existing unit test frameworks when they should instead work together and reuse the infrastructure.
https://github.com/leblancmeneses/BddIsTddDoneRight
This is a fluent BDD syntax that can be used with MSTest/NUnit and works with existing test runners.
I am using Ruby on Rails 3.2.2, cucumber-rails-1.3.0, rspec-rails-2.8.1, capybara-1.1.2 and factory_girl-2.6.3. I have a Scenario that tests the user sign up like the following:
Scenario: I register an user
When I fill in "Name" with "Foo"
And I fill in "Email" with "foo_bar#email.com"
And I fill in "Password" with "test_password"
And I click button "Register"
Then I should be redirected to the homepage
and I am trying to write a new Feature (in a separate file) implementing a Scenario that, by using a token, tests that the signed-up user confirmation process completes properly (note: in reality this process involves delivering an email message, but I do not want to test whether that email was sent; just ignore it for this question). Since I would like to test the confirmation process exclusively, I thought to implement a new Feature/Scenario stating something like Given I am a registered user, which would run the Scenario: I register an user.
Scenario: I confirm an user
Given I am a registered user # Here I would like to run the 'Scenario: I register an user'
When I go to the confirmation page
And I enter the confirmation token
Then ...
In a few words, I thought to call an entire Scenario from within another Scenario.
Even though I could use the FactoryGirl gem to "create"/"register" a user, I prefer to proceed with the above approach because I'd like the tests ("practically" speaking, they "represent" "invisible people" that, driven by Selenium, open my browser and perform actions) to behave as much as possible like "real people" behave, so that the test steps implicitly follow all the "real" steps that would be made in reality in order to sign up a new user. By following this approach I can test whether or not the application "internals" are working properly.
Is it a correct approach? If no, how could/should I proceed?
One of the principles of well designed tests is that you exercise a particular feature in all of its various forms, but beyond that presume it to be working within the context of other tests that are exercising other features. That is, once you've tested that users can register, you should not be re-testing the same thing unless you're testing it in a different way.
The phrasing "Given I am a registered user" is something that implies the user has already successfully registered. You shouldn't go testing registration at this point since that's outside the scope of the test you're trying to perform.
Each test should have a mandate and it should stick to it. If you don't limit yourself your tests will spiral into a uselessly inter-dependent mess where changing one thing requires changing every other thing hooked in to it.
The syntax to run steps from another scenario within a step definition is:
Given /^I authenticate successfully$/ do
  Given 'I am on the login page'
  And 'I fill in "Email" with "eddie@hotmail.com"'
  And 'I fill in "Password" with "secret"'
  When 'I press "Login"'
  Then 'I should see "Welcome"'
end
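Note that calling Given/When/Then inside a step definition was deprecated in later Cucumber versions; there the step helper is the supported way to nest steps. The same step definition would look like:

Given /^I authenticate successfully$/ do
  step 'I am on the login page'
  step 'I fill in "Email" with "eddie@hotmail.com"'
  step 'I fill in "Password" with "secret"'
  step 'I press "Login"'
  step 'I should see "Welcome"'
end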
Final thoughts: in general, keep "fill in" steps out of your feature files; have the features describe the actions and let your step definitions do the dirty work.
Let's say I've got a feature called create_account, which calls a number of steps to create the account.
Now I want to make a more elaborate feature test where having an account is really just a step in a bigger scenario. Do I need to recode my original feature as steps, or can I somehow call the original feature in my new scenario?
You cannot call a feature or scenario from a step. But what you want can probably be accomplished using Background (steps that will be executed before every scenario in a feature, see https://github.com/cucumber/cucumber/wiki/Background):
Feature: Different ways to create account

  Background:
    # Some steps to create account

  Scenario: Create account
    # Nothing

  Scenario: Create account and do something
    # Something else
Alternatively, you can pack all the steps of the initial scenario into a single compound step and use that, as sketched below.
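A minimal sketch of such a compound step, using Cucumber's steps helper; the nested step texts are placeholders for whatever your create_account feature actually uses:

Given /^I have created an account$/ do
  steps %Q{
    Given I am on the registration page
    When I fill in "Email" with "user@example.com"
    And I press "Create account"
  }
end

Any scenario can then start with Given I have created an account without repeating the underlying steps.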