Can we create a SpecFlow Table instance at runtime and add values to it? - specflow

I am creating an automation solution where I connect to JIRA to get details like the number of test cases, pass/fail counts, etc., and generate a report to show these details.
So I thought I would use SpecFlow to improve readability, and Allure to generate my reports. Hence, I am trying to create a SpecFlow Table instance at runtime and attach it to the report. Since I am not passing any data to the step, no SpecFlow Table instance is created.
Eg:
Given I am connected to Jira
And I get test cases details
[Given(@"I get test cases details")]
public void GivenIGetTestCasesDetails()
{
    // Add data from Jira to a Table
}
And Generate report that has data like
Is there a way to create a Table instance at runtime, add values to it, and then attach it to the Allure report?
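SpecFlow's TechTalk.SpecFlow.Table can be constructed directly in code, so a step that receives no table from the feature file can still build one at runtime. A minimal sketch, assuming hypothetical column names and Jira values:

```csharp
using System;
using TechTalk.SpecFlow;

[Binding]
public class JiraReportSteps
{
    [Given(@"I get test cases details")]
    public void GivenIGetTestCasesDetails()
    {
        // The Table constructor takes the header cells.
        var table = new Table("TestCase", "Status");

        // AddRow appends one row of cell values (dummy Jira results here).
        table.AddRow("TC-101", "Passed");
        table.AddRow("TC-102", "Failed");

        // table.ToString() renders the familiar pipe-delimited layout,
        // which can then be written to whatever report attachment you use.
        Console.WriteLine(table.ToString());
    }
}
```

From there, attaching the rendered string to Allure depends on your Allure adapter's attachment API, so treat this only as the table-building half of the problem.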

Related

Is there a way to run a BigQuery stored procedure from Google Data Studio, with parameters?

I am trying to setup a Google Data Studio report for data that only needs to be accessed intermittently, such that refreshing the data on an interval in a table would be more expensive than just refreshing it as needed. For this reason, I have written a procedure in BigQuery that SELECTs the required data (given a number of parameters), with the intent to call the procedure on an as needed basis from Data Studio.
I thought it must be possible to add a BigQuery connector and create a custom query with the contents e.g.
CALL `my-project`.MyDataset.myProcedure(@myparameter);
And parameters defined for the custom query, such as:
But when I save the custom query, I get an error such as:
However, if I do not use a parameterized value, I am able to save the custom query. Such as:
CALL `my-project`.MyDataset.myProcedure("myvalue");
Does anyone know if there is a way to save a custom query, with a called procedure, with parameters?

Unit Test Cases with Mocking repository on real data not fake data

I have a basic understanding of mocking and fake objects, and of passing fake data to unit test methods. But is it possible to use the actual data returned by our repositories instead of creating our own fake data for our unit test methods?
I am using NUnit for a .NET MVC 5 application, as shown in the sample code below:
mockType.Setup(x => x.GetUserDetails()).Returns(
    new UserDetail() { id = 1, name = "John" }
);
So I need to return the actual data from the GetUserDetails method, instead of creating fake data (as in the example above). I need to get the user details from a DB table instead of creating a fake UserDetail. Please give your advice, and let me know if you need more information.
Thanks in advance.
Tests which access other layers of your application are called "integration tests", or "acceptance tests" if you test the full "pipeline" of your application: UI -> database -> UI.
In your particular case you need an empty database with exactly the same schema as used in production.
Insert data for test case
Execute test
Assert results
Clean database
Accessing a real database, inserting data, and cleaning the database after every test will make the tests slower, but being slow is acceptable for integration or acceptance tests.
To keep the test and production database schemas in sync, you should maintain migration scripts or similar tooling of your choice.
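The insert/execute/assert/clean cycle above maps naturally onto NUnit's SetUp and TearDown hooks. A minimal sketch, where the repository class, its methods, and the connection string are all hypothetical:

```csharp
using NUnit.Framework;

[TestFixture]
public class UserRepositoryIntegrationTests
{
    private UserRepository _repository;  // hypothetical repository under test

    [SetUp]
    public void InsertDataForTestCase()
    {
        _repository = new UserRepository("TestDbConnectionString");  // hypothetical
        _repository.Add(new UserDetail { id = 1, name = "John" });   // insert test data
    }

    [Test]
    public void GetUserDetails_ReturnsInsertedUser()
    {
        var user = _repository.GetUserDetails();  // execute test
        Assert.AreEqual("John", user.name);       // assert results
    }

    [TearDown]
    public void CleanDatabase()
    {
        _repository.DeleteAll();  // clean database so each test starts empty
    }
}
```

Because SetUp and TearDown run around every test, each test sees a known database state regardless of execution order.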

Sharing data between scenarios Specflow

I currently have about 15 scenarios in one feature file and want to share data between them. I thought context injection would work, and it does work between steps within a single scenario, but I can't get it to pass data from one scenario to another. How does everyone else achieve this?
Short answer:
No one does this, as it's a Bad Idea™
Long answer:
If you have data valid for the whole feature, place it in the feature context. But this data can't be modified in one scenario and accessed in another.
The tests will be executed in an order determined by your test runner. Different runners may choose different orders, and the execution order may change from one release of a runner to the next. Temporal coupling or implicit dependencies between your tests cause other problems as well. What happens if I want to run a test on its own? It will fail, because the previous tests have not been run first. What if I want to run the tests in parallel? Now I can't, because the tests have dependencies which need to run first.
So what can I do?
My suggestion would be to use Background steps (or explicit steps in your Givens) to set up the data each individual scenario requires. SpecFlow makes it fairly simple to reuse these steps, or have them reuse other steps. So if you need a customer and a product to create an order, you might have scenarios like this:
Scenario: Creating a customer
Given I create a new customer called 'bob'
When I query for customers called 'bob'
Then I should get back a customer
Scenario: Creating a product
Given I create a new product called 'foo'
And 'foo' has a price of £100
When I query for products called 'foo'
Then I should get back a product
And the price should be £100
Scenario: customer places an order
Given I have a customer called 'bob'
And I have a product called 'foo' with a price £100
When 'bob' places an order for a 'foo'
Then an order for 1 'foo' should be created
Here the last scenario creates all the data it needs. It can reuse the same step (with a different Given attribute) as Given I create a new customer called 'bob', and it can have a new step And I have a product called 'foo' with a price £100 which simply calls the two existing steps Given I create a new product called 'foo' and And 'foo' has a price of £100.
This ensures that the test is isolated and does not have any dependencies.
You can create a static IDictionary<String, Object> globalData variable in another class, say Global.cs.
Now, in scenario 1, save any object:
Global.globalData["Key"] = myObject;
In scenario 2, retrieve the object by its key and cast it back to its original type:
var dataFromScen1 = (MyType)Global.globalData["Key"];
In this way you can use data from scenario 1 in scenario 2, but you will face issues during parallel execution.
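A minimal sketch of that static holder class, with a hypothetical key and value (and note again that a shared static like this is unsafe under parallel test execution):

```csharp
using System.Collections.Generic;

public static class Global
{
    // Shared across all scenarios in the test run (not thread-safe).
    public static readonly IDictionary<string, object> globalData =
        new Dictionary<string, object>();
}

public class Scenario1Steps
{
    public void SaveOrderId()
    {
        Global.globalData["OrderId"] = 42;  // scenario 1 stores a value
    }
}

public class Scenario2Steps
{
    public void ReadOrderId()
    {
        // Scenario 2 retrieves it by key and casts back to the original type.
        var orderId = (int)Global.globalData["OrderId"];
    }
}
```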

Organising code in Specflow steps

I understand from Techtalk that coupling step definitions to features is an anti-pattern.
However I am wondering how to organise my step definitions so that I can easily see in the code what steps go together.
For example should I create a code file for each Feature and a separate file for steps that are shared? or should I have a code file for a set of features?
I tend to organize step definitions in a couple of ways, depending on how big the step definition files get.
Separate data and UI steps
I have lots of Given something exists in the database steps, so I usually throw these into a file called DataSteps.cs. I also have lots of Then something exists in the database steps, and those go into DataAssertionSteps.cs.
All of my When I fill in some form field steps go in FormSteps.cs. Anytime I need Then something in the form is enabled|disabled or asserting that form fields have a certain value, I throw those into FormPresentationSteps.cs.
Separate Steps by Model
Sometimes my step definition files get very large, and I start moving step definitions into files related to a certain model. Say I have a Person model. I might create a PersonSteps.cs file that is split into three regions:
[Binding]
public class PersonSteps
{
    #region Given
    // Given a person exists with the following attributes:
    #endregion

    #region When
    // When something something about a Person
    #endregion

    #region Then
    // Then a person should exist with the following attributes:
    #endregion
}
Basically I start out with these files:
DataSteps.cs - The Givens for setting up test data
DataAssertionSteps.cs - The Thens for asserting that data exists correctly in the database
PresentationAssertionSteps.cs - Just general stuff making sure the UI looks as it should
FormSteps.cs - Steps for manipulating forms
FormPresentationAssertionSteps.cs - Making sure form fields have the proper values, are enabled/disabled correctly. Validation messages show up, etc.
GeneralSteps.cs - A poorly named catch-all for stuff that doesn't fit in the above files. Maybe I should rename it "AndIShallCallItSomethingManagerSteps.cs"?
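As a concrete illustration of that layout, a DataSteps.cs file might hold the shared data Givens. A sketch, where the step text, column names, and seeding logic are all hypothetical:

```csharp
using TechTalk.SpecFlow;

[Binding]
public class DataSteps
{
    [Given(@"a person exists with the following attributes:")]
    public void GivenAPersonExists(Table table)
    {
        // Walk the Gherkin table rows and seed the test database.
        foreach (var row in table.Rows)
        {
            // row["Name"], row["Age"], etc. are hypothetical columns;
            // insert each person via your data-access layer here.
        }
    }
}
```

Because the step lives in its own file, any feature in the project can reuse it without caring where it is defined; SpecFlow matches steps by attribute text, not by file.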

Adding New SSRS Report Design - Sales Quotation Report

I've added a new report design to the Sales Quotation report (AX2012).
The only way I could get the design to print out was to either amend the Standard Report Design or amend the Class SalesFormLetterReport_Quotation and change the Report Name in method getDefaultPrintJobSettings to my new report design. This is because the Quotation report uses Print Management settings and always uses the default report design.
My question is, if I want to print a different design based on some data criteria, i.e. a different customer type, how could I do this?
The only thing I can think of is to change the SalesFormLetterReport_Quotation class and override the method 'loadPrintSettings'.
I tried adding a new conditional setting in the Print Management setup but this still defaults to the default report design.
Take a look at:
\Classes\PrintMgmtDocType\getDefaultReportFormat
\Data Dictionary\Tables\PrintMgmtReportFormat\Methods\populate
\Data Dictionary\Tables\SRSReportDeploymentSettings\Methods\populateTableWithDefault
These methods have all sorts of report layouts hard-coded. Really nasty stuff!
To assign a different design based on customer, you can modify the Controller class which opens the SalesQuotation Report.
Edit the main method in SalesQuotationController class.
Write logic to assign the design based on your specific requirements.
You may edit the following line in the SalesQuotationController main method:
formLetterController.initArgs(_args, ssrsReportStr(SalesQuotation,Report));
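For example, the design name passed to ssrsReportStr could be chosen conditionally. A sketch in X++, where the customer-group value and the MyCustomDesign design name are hypothetical:

```
// In SalesQuotationController.main, pick the design from data criteria.
if (custTable.CustGroup == 'Wholesale')  // hypothetical criterion
{
    formLetterController.initArgs(_args, ssrsReportStr(SalesQuotation, MyCustomDesign));
}
else
{
    formLetterController.initArgs(_args, ssrsReportStr(SalesQuotation, Report));
}
```

How custTable is obtained at that point depends on the _args contents, so the lookup itself is left out of this sketch.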
