TFS 2015 test case Web UI does not display data in columns

I have a few users who are not able to see the Details of test cases.
They can see the test plans. The test plan is Active.
They do not have permission to create test cases, but they should be able to see them and run them. Some of the test cases in the test plan have a state of Ready; some have a state of Design.
The users have access to see and run tests, but every column in their test case grid displays blank.
If they try to run a test case they get the message "You cannot run the selected tests. The test case no longer exists.", which is clearly wrong because I and other users can see the test.
The users in question have visibility to the area and iterations.
I'm stumped and I'm sure it's something simple. Why is all of the column data blank?

This can be an issue with area-path permissions. Even though I see that you have mentioned "The users in question have visibility to the area and iterations", I'd like you to try this simple task:
Ask the user who is not able to see the tests in the Test hub to head over to the Work hub and write a simple work item query to see if she can access those test case work items there. You can get the test case work item IDs from someone who is able to see them in this test plan, or you can write a generic query for all test case work items and see whether the ones that have gone missing in the Test hub show up in the Work hub.
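If it helps, here is a minimal sketch of that generic query done programmatically with the TFS Java SDK (TEE); the collection URL and project name are placeholders, and the WIQL string is the same query you would build in the Work hub editor:

import java.net.URI;
import com.microsoft.tfs.core.TFSTeamProjectCollection;
import com.microsoft.tfs.core.httpclient.DefaultNTCredentials;
import com.microsoft.tfs.core.clients.workitem.WorkItem;
import com.microsoft.tfs.core.clients.workitem.WorkItemClient;
import com.microsoft.tfs.core.clients.workitem.query.WorkItemCollection;

public class FindTestCases {
    public static void main(String[] args) {
        // Placeholder collection URL; authenticates as the logged-on Windows user.
        TFSTeamProjectCollection tpc = new TFSTeamProjectCollection(
                URI.create("http://your-tfs:8080/tfs/DefaultCollection"),
                new DefaultNTCredentials());
        WorkItemClient client = tpc.getWorkItemClient();

        // Generic query for all Test Case work items in the project.
        WorkItemCollection results = client.query(
                "SELECT [System.Id], [System.Title], [System.AreaPath] "
              + "FROM WorkItems "
              + "WHERE [System.TeamProject] = 'YourProject' "
              + "AND [System.WorkItemType] = 'Test Case'");

        for (int i = 0; i < results.size(); i++) {
            WorkItem item = results.getWorkItem(i);
            System.out.println(item.getID() + ": " + item.getTitle());
        }
    }
}

Run it (or the equivalent Work hub query) as the affected user: if the missing test cases don't come back here either, that points at area-path permissions.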
If she is not able to access the test case work items in the Work hub, then it's an area-path permissions issue. If it isn't, I'll be glad to ask someone on my team to look into this. Please reach out to us at devops_tools#microsoft.com
Thanks,
Manoj

It was indeed a problem with the areas. But not in the project in question.
The test cases in question were pulled in from a different project. Everything looked normal if you opened a test case: no errors about invalid area paths, and all the permissions on the project looked OK. The user had all the appropriate permissions. It just wasn't until I added up all the pieces that I saw the disconnect.


Automated UITest strategy for 2 mobile applications - User app/Admin app

I am on a development team where we have 2 separate mobile apps. One of the apps is for users; the other is for admins of those users. My main objective is to execute a test case in the Admin app, and then run a test case in the Users app to verify it's working properly. How can I approach this?
For example, I want to run a test case in the Admin app that revokes some privilege. I then want to run a test case from the Users app to confirm that the privilege was revoked.
Maybe this is not a good strategy at all, but it makes sense for my team because we have 2 apps that work together: if we do some function in the Admin app, we want to see the expected result in the Users app.
My plan was to mark each test with a Category, for example, "Privilege"
On Jenkins:
Run "Privilege" Category on the Admin app where I revoke some privilege
Run "Privilege" Category on the User app where I confirm revoked privilege
This seems like an OK test strategy right now. But if I have one Jenkins project per UITest (per device, per platform), then with 20 UITests I'll end up with over 100 Jenkins projects in my dashboard. That's not really ideal to me.
Has anybody else come up with a testing strategy where they needed to test 2 separate projects back and forth? I understand that this does not really fall under unit testing, and I may get some vague answers about unit testing in general, but I do believe mobile is a different animal in the UITest world.
There are a couple of points in your question:
"do some function in the Admin app -- we want to see the expected result in the Users app"
If you need to test such integrations between the two apps, label the tests that cover them accordingly.
"mark each test with a Category"
In any case, you will need some way to organise your suites. A good way to do so is test annotations, as in the sketch below. I think lazy setup is applicable in your case: it will set up the desired state for all marked tests when needed.
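For illustration, a minimal sketch using JUnit 4 categories (assuming a JUnit-based UITest harness; the Privilege marker and test names are made up):

import org.junit.Test;
import org.junit.experimental.categories.Category;

// Marker interface used as the category tag (in its own file, e.g. Privilege.java).
interface Privilege {}

public class AdminPrivilegeTests {

    @Test
    @Category(Privilege.class)
    public void revokesPrivilege() {
        // drive the Admin app UI to revoke the privilege here
    }
}

A single Jenkins job can then select the category through the runner's CLI (e.g. with Maven Surefire, mvn test -Dgroups=com.example.Privilege), passing platform and device as extra parameters instead of creating one job per test.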
"needed to test 2 separate projects back and forth"
End-to-end tests are mandatory for the most business-critical features. My suggestion is to employ backdoor manipulation. Your other tests should already have covered the simpler cases (e.g. setting a privilege in the Admin app), so if you have already exercised this feature, there is no point in redundancy.
"It seems that with 20 UITests I'll end up with over 100 Jenkins projects. That's not really ideal"
You actually don't need a Jenkins project per suite; just configure the tests via CLI arguments, as above, and your harness will pick that up for you. What you need is a tag (or platform, or device) to be passed to the runner.
Generally, you do NOT want tests to depend on each other. Have a look at this example:
In the admin app, you set the privilege.
You open the user app.
The privilege should be set, but it's not.
You know that something went wrong, but you don't know whether it's the admin app that's not working or the user app.
Therefore, you should test them independently by mocking (=faking) the backend:
Open the admin app.
Set the privilege.
Ask the mocked backend: Did you receive a call from the admin app to set the privilege?
In an independent UI test for the user app, you do the following:
Set up a fake backend where the privilege is set
Open the user app
See whether the privilege is set.
By separating the tests for both apps, you will know which of the two does not work.
The software that I use at work to do such things is called WireMock, but there are others out there, too.
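To make that concrete, here is a minimal sketch with WireMock (Java); the endpoint paths and JSON body are assumptions for illustration:

import com.github.tomakehurst.wiremock.WireMockServer;
import static com.github.tomakehurst.wiremock.client.WireMock.*;

public class PrivilegeBackendMock {

    public static void main(String[] args) {
        // Point both apps at http://localhost:8080 while the tests run.
        WireMockServer server = new WireMockServer(8080);
        server.start();

        // User-app test: fake a backend where the privilege is already revoked.
        server.stubFor(get(urlEqualTo("/api/privilege"))
                .willReturn(aResponse()
                        .withStatus(200)
                        .withBody("{\"privilege\": \"revoked\"}")));

        // ... run the user-app UI test against the fake backend here ...

        // Admin-app test: verify the app actually called the backend to revoke.
        server.verify(postRequestedFor(urlEqualTo("/api/privilege/revoke")));

        server.stop();
    }
}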

TFS 2017 power tool custom field connected to User Story

Using the Power Tool I was able to add a new custom field, but I want it to automatically take a specific value when one is present.
Each test case has an associated "user story", but to see the association we have to go to the "Tested User Stories" section, so I'd like the ID of the user story to appear automatically on the main page.
Sorry for any confusion; hopefully this is understandable.
Can someone advise me on what to introduce in the Visual Studio Power Tool, please? I'm finding various examples, but none explain how to auto-populate a field from another dynamic item.
If you are asking me why: it's because we want to have the "User Story ID" present on the main page, so we know directly which user story the test case is associated with.
There is no way to get the User Story ID as a field. Since a single Test Case can test multiple User Stories, this makes little sense anyway.
Have you tried adding Requirement-based Suites in Test Professional so that you can see all of the Test Cases under a Story? From a testing perspective it makes a lot of sense to look from the Story down to the Tests rather than from the Test up to the Stories.
Although I'm describing the Web App, the features are the same in MTM. You can add a Requirement-based Test Suite that relates back to the User Story.
As @MrHinsh mentioned, there is no direct way to get the User Story ID from a field.
From a TFS query, however, it's easy to get the linked User Story: select "Work Items and Direct Links" as the query type in the work item query editor.
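For reference, here is a sketch of the WIQL behind such a query, as a Java string you could also run through the TFS SDK; the link type name assumes the standard "Tested By/Tests" link and may differ in a customised template:

// WIQL for a "Work Items and Direct Links" style query: test cases and the
// user stories they test. Run it in the query editor or via a link query in the SDK.
String wiql =
    "SELECT [System.Id], [System.Title] "
  + "FROM WorkItemLinks "
  + "WHERE ([Source].[System.WorkItemType] = 'Test Case') "
  + "AND ([System.Links.LinkType] = 'Microsoft.VSTS.Common.TestedBy-Reverse') "
  + "AND ([Target].[System.WorkItemType] = 'User Story') "
  + "MODE (MustContain)";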

TFS security group cannot access test cases

We have two groups: TESTER and PROVIDER.
Group PROVIDER should only edit Bugs and should have no access to anything Test Case related.
Group TESTER should only create and edit Bugs, Test Plans, and Test Cases.
How, in TFS 2012, do I remove a PROVIDER group's access to Test Case items?
Thank you
First, let's state up front, so as to get it out of the way, that this is ridiculously dysfunctional behaviour. If you don't trust people, don't give them permissions at all.
That out of the way, you can sort of hack this by customising the process template. If you edit the Bug work item type to restrict who can move it from state "" to "New", then anyone else will get an error when they try to create one. This works for any work item type.
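As an illustration only, here is roughly what that restriction looks like in the work item type definition XML; the group name is a placeholder for your own group:

<!-- Inside the Bug type's TRANSITIONS element: blocks PROVIDER members from
     moving a Bug from "" to "New", i.e. from creating one. -->
<TRANSITION from="" to="New" not="[Project]\PROVIDER">
  <REASONS>
    <DEFAULTREASON value="New" />
  </REASONS>
</TRANSITION>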
For Test Suites and Plans you need the "Create Test Plan" permission, which is in "Settings | Permissions" in the web access.
In TFS 2013.3, Test Suites and Test Plans are now work items, so it's a little different.

What would be the best way to test a flow (cross controllers)?

There is a certain flow within our application that makes separate calls with related data. These are in different controllers, but interact with the same user.
We're trying to build a test to confirm that the full flow works as expected. We've built individual tests for the constituent parts, but want a test for the full flow.
E.g., we have a user who checks in to work (checkin) and then builds a widget (widgetize). We have methods that filter our users by who has checked in and who has widgetized (and checked in). We can build little objects with FactoryGirl to ensure that the filter works, but we want a test that will have one user check in, another user check in, and the second one widgetize, so that we can confirm that our filtering methods capture only the users we want them to capture.
My first thought was to build an RSpec test that simply made a direct call to checkin from the widgetize spec, and then confirmed the filter methods -- but I discovered that RSpec does not allow cross-controller calls (or at least I could not figure out how to make it work; POSTs and GETs to that controller were not working). Also, people told me this was very bad practice.
How should I go about testing this?
This article does a good job of going over how you can use request specs for integration tests:
http://everydayrails.com/2012/04/24/testing-series-rspec-requests.html
Basically, you want to use a gem like Capybara to simulate user input, run your tests through the whole app, and check that everything behaves as you expect.

How to run a story multiple times with different parameters

I have developed a jBehave story to test some work flow implemented in our system.
Let’s say this story is called customer_registration.story
That story is a starting point of some other more complex work flows that our system supports.
Those more complex work flows are also covered by different stories.
Let’s say we have one of our more complex work flows covered by a customer_login.story
So the customer_login.story will look somehow like below:
Story: Customer Login
Narrative:
In order to access ABC application
As a registered customer
I want to login into the application
Scenario: Successfully login into the application
GivenStories: customer_registration.story
Given I am at the login page
When I type a valid password
Then I am able to see the application main menu
All works perfectly and I am happy with that.
The customer registration story above is something I need to run on different sets of data.
Let's say our system supports i18n and we need to check that the customer registration story runs OK for all supported languages; say we want to test that our customer registration works with both en-gb and zh-tw.
So I need to implement a multi_language_customer_registration.story that will look something like that:
Story: Multi language customer registration
Narrative:
In order to access ABC application
As a potential customer
I want to register for using the application
Scenario: Successful customer registration using different supported languages
GivenStories: customer_registration.story
Then some clean-up step so the customer registration story can run again
Examples:
|language|
|en-gb |
|zh-tw |
Any idea about how I could achieve this?
Note that something like the below is not an option, as I do need to run the clean-up step between runs:
GivenStories: customer_registration.story#{0},customer_registration.story#{1}
Moving the clean-up step inside the customer registration story is not an option either, as then the login story would stop working.
Thanks in advance.
P.S. As you could guess, in reality the stories we created are more complex and refactoring them is not an easy task, but I am happy to do so for a real gain.
First off, BDD is not the same as testing. I wouldn't use it for every single i18n scenario. Instead, isolate the bit which deals with i18n and unit test that, manually test a couple of languages, and call it done. If you really need to be more thorough, then use it with a couple of languages, but don't do it with all of them - just enough examples to give you some safety.
Now for the bit with the customers. First of all, are logging in and registration really that interesting? Are you likely to change them once you've got them working? Is there anything special about logging in or registration that's particular to your business? If not, try to keep that stuff out of the scenarios - it'll be more of a pain to maintain than it's worth, and if it's never going to change you can just test it once, manually.
Scenarios which show what the user is logging in for are usually more enticing and interesting to the business (you are having conversations with the business, right?).
Otherwise, here are the three ways in which you can set up a context (Given):
By hacking the data (so accessing the database directly)
Through the UI (or controller if you're automating from that level)
By using existing data.
You can also look to see if data exists, and if it doesn't, set it up. So for instance, if your customer is registered and you don't want him to be registered, you can delete his registration as part of setting up the context (running the Given step); or if you need him to be registered and he isn't, you can go through the UI to register him.
Lastly, JBehave has an @AfterScenario annotation which you can use to denote a clean-up step for that scenario. Steps are reusable - you can call the steps of one scenario from within another step in code, rather than using JBehave's mechanism (this is more maintainable anyway, IMO), and this will allow you to avoid clearing registration when you log in.
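For example, a minimal sketch of a jBehave steps class with such a clean-up step (the method bodies are left to your registration backdoor):

import org.jbehave.core.annotations.AfterScenario;
import org.jbehave.core.annotations.Given;

public class CustomerRegistrationSteps {

    @Given("I am a registered customer")
    public void registerCustomer() {
        // register the customer through the UI or a backdoor
    }

    // Runs after each scenario, so the registration story can run again
    // for the next language without a separate clean-up step in the story.
    @AfterScenario
    public void cleanUpRegistration() {
        // delete the customer registration here
    }
}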
Hope one of these options works for you!
From a tactical standpoint, I would do this:
In your .story file:
Given I set my language to <language>
When I type a valid password <pass>
Then I am able to see the application main menu
Examples:
|language|pass      |
|en-gb   |password1 |
|zh-tw   |kpassword2|
Then in your Java steps class (jBehave binds the angle-bracket parameters via @Named):

import org.jbehave.core.annotations.*;

@Given("I set my language to <language>")
public void setLanguage(@Named("language") String language) { /* method body goes here */ }

@When("I type a valid password <pass>")
public void typePassword(@Named("pass") String password) { /* method body goes here */ }

@Then("I am able to see the application main menu")
public void seeMainMenu() { /* method body goes here */ }
Most unit testing frameworks support this.
Look at how MSTest lets you specify a DataSource; NUnit is similar:
https://github.com/leblancmeneses/RobustHaven.IntegrationTests
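For comparison, the JUnit 4 equivalent in Java (keeping with the jBehave examples above) is a parameterized test; the class and method names here are illustrative:

import java.util.Arrays;
import java.util.Collection;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

// Runs the same registration check once per language/password row.
@RunWith(Parameterized.class)
public class CustomerRegistrationTest {

    @Parameters
    public static Collection<Object[]> data() {
        return Arrays.asList(new Object[][] {
            { "en-gb", "password1" },
            { "zh-tw", "kpassword2" }
        });
    }

    private final String language;
    private final String password;

    public CustomerRegistrationTest(String language, String password) {
        this.language = language;
        this.password = password;
    }

    @Test
    public void registersAndLogsIn() {
        // exercise the registration and login flow for this language here
    }
}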
Unfortunately, some of the BDD frameworks I've seen try to replace existing unit test frameworks when they should instead work together and reuse the existing infrastructure.
https://github.com/leblancmeneses/BddIsTddDoneRight
is a fluent BDD syntax that can be used with MSTest/NUnit and works with existing test runners.
