Sharing data between scenarios in SpecFlow

I currently have about 15 scenarios in one feature file and want to share data between them. I thought context injection would work, and it does work between steps within a single scenario, but I can't get it to pass data from one scenario to another. How does everyone else achieve this?

Short answer:
No one does this, as it's a Bad Idea™
Long answer:
If you have data that is valid for the whole feature, place it in the feature context; but even that data can't be modified in one scenario and then accessed in another.
The tests will be executed in an order determined by your test runner. Different runners may choose different orders, and the execution order may change from one release of a runner to the next. Having temporal coupling or implicit dependencies between your tests causes other problems as well: what happens if I want to run a test on its own? It will fail, because the previous tests have not been run first. What if I want to run the tests in parallel? I can't, because the tests have dependencies which need to run first.
So what can I do?
My suggestion would be to use background steps (or explicit steps in your givens) to set up the data your individual scenario requires. SpecFlow makes reusing these steps, or having these steps reuse other steps, fairly simple. So if you need a customer and a product to create an order, and you have scenarios like this:
Scenario: Creating a customer
Given I create a new customer called 'bob'
When I query for customers called 'bob'
Then I should get back a customer
Scenario: Creating a product
Given I create a new product called 'foo'
And 'foo' has a price of £100
When I query for products called 'foo'
Then I should get back a product
And the price should be £100
Scenario: customer places an order
Given I have a customer called 'bob'
And I have a product called 'foo' with a price £100
When 'bob' places an order for a 'foo'
Then an order for 1 'foo' should be created
Here the last scenario creates all the data it needs. It can reuse the same step (with a different Given attribute) as Given I create a new customer called 'bob', and it can have a new step, And I have a product called 'foo' with a price £100, which just calls the two existing steps Given I create a new product called 'foo' and And 'foo' has a price of £100.
This ensures that the test is isolated and does not have any dependencies.
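As a rough sketch of that reuse (assuming the step texts above and SpecFlow's Steps base class, which lets one binding call other steps by their Gherkin text; the class name and the body of the creation step are placeholders, not the poster's actual code):

using TechTalk.SpecFlow;

[Binding]
public class SetupSteps : Steps
{
    // One method can carry several Given attributes, so the order scenario's
    // wording reuses the existing customer-creation step.
    [Given(@"I create a new customer called '(.*)'")]
    [Given(@"I have a customer called '(.*)'")]
    public void GivenACustomerCalled(string name)
    {
        // ... create the customer in the test database (placeholder)
    }

    // A composite step that simply calls the two existing product steps by
    // their Gherkin text, via the Steps base class.
    [Given(@"I have a product called '(.*)' with a price £(.*)")]
    public void GivenAProductCalledWithAPrice(string name, decimal price)
    {
        Given($"I create a new product called '{name}'");
        Given($"'{name}' has a price of £{price}");
    }
}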

You can create a static IDictionary<String, Object> globalData variable in another class, say Global.cs.
Now, in scenario 1, save any object:
Global.globalData["Key"] = myObject;
In scenario 2, retrieve the object by its key and cast it back to its original type:
var dataFromScen1 = (MyType)Global.globalData["Key"];
In this way you can use data from scenario 1 in scenario 2, but you will face issues during parallel execution.
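A bare-bones sketch of such a holder class, using the names from this answer; note again that static state is shared process-wide, which is exactly why it breaks under parallel execution:

using System.Collections.Generic;

public static class Global
{
    // Process-wide storage that survives across scenarios in the same run.
    // It is not thread-safe, so parallel test execution will cause problems.
    public static readonly IDictionary<string, object> globalData =
        new Dictionary<string, object>();
}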

Related

Azure Data Factory: foreach loop with batchCount and item() property

I have a Data Factory pipeline with a ForEach loop containing two activities: one to call an HTTP endpoint to retrieve a file, and one to store this file in an Azure storage account.
I have set the batch count to 5 to speed up the process.
I use the item() property in the activities inside the ForEach. As far as I can see, when there are parallel executions the item() property is not reliable, because executions are interleaved and item() could be overwritten by another branch of the ForEach loop.
What I'm looking for is the ability to read the item() value in the first step of the ForEach loop, store it in a variable scoped to the current iteration, and then use the content of that variable in the later stages of the loop.
Or maybe there is a better way to manage my use case. Any ideas?
I think the only solution is to perform the iteration sequentially. In ADF, pipeline variables are shared across all parallel branches of the ForEach, and the multi-threaded execution of the tasks cannot be controlled manually, so you need to tick the Sequential option.
I got the same issue today and was going through answers here on SO.
Took me some time but I was able to fix the issue.
Assume Table_Name is the property of the item I want to use inside the ForEach.
If I use @item().Table_Name directly for dynamic content, I run into an issue where different instances of the ForEach loop share the same value, which is wrong and leads to failures afterwards.
If I add a Set Variable activity as the first step in the ForEach loop to capture the @item().Table_Name value, it works as expected. Later in the other steps I was able to use that variable in all the different places that need a dynamic input.
In short: add a Set Variable activity as the first step and it works.

Model to create multiple varied instances of another model in Rails 4 without unnecessary coupling

What is considered best practice for having one ActiveRecord model which, depending on the values it is assigned, generates instances of a separate model? I'm concerned that my models 'know' too much about each other and are becoming difficult to write tests for and am looking for the correct 'Rails way' of tackling this.
Essentially the first model is a specification that determines the number of instances and the attributes assigned to those instances of another model. I want to be able to change either that specification or the individual instances freely.
I am working on creating a budgeting app for my company. The idea is to track cash flows as a series of transactions represented by my Transaction model.
I don't want the user to have to manually enter each transaction (but I want them to be able to edit each one later if needed), so I've created CashRevenue and CashExpense models where the user can enter things like a start date, end date and growth rate. That gets persisted to the database and then triggers a non-ActiveRecord class, TransactionBuilder, which processes the info, calls Transaction.new the correct number of times and sets the attributes correctly.
To implement this, I have CashRevenue and CashExpense call TransactionBuilder.build_transactions(self), which assigns instance variables and calls a method to remove any existing transactions associated with the calling object (in case it's an update action).
Since revenue transactions are treated differently in the app than expenses, I'm checking the type by doing @cashflow_type = cashflow_item.class.to_s before the create.
It then creates the transactions as follows:
def self.add_transactions
  number_of_occurences.times do
    add_transaction
  end
end

def self.add_transaction
  transaction = Transaction.new
  # Revenue and expense transactions are linked back to their source differently
  if @cashflow_type == "CashExpense"
    transaction.cash_expense_id = @cashflow_item.id
  else
    transaction.revenue_id = @cashflow_item.id
    transaction.planned_number_of_customers = get_number_of_customers
    transaction.planned_arpu_amount = get_arpu_amount
  end
  transaction.planned_amount = get_transaction_amount
  transaction.planned_date = get_transaction_date
  transaction.edit_method = "SYSTEM"
  transaction.user_id = @cashflow_item.user_id
  if transaction.save!
    @last_transaction = transaction
  end
end
Is there a better way to structure my app to achieve the same tasks while making my models less tightly coupled? I'm expecting this application to grow quickly in complexity, so I want to keep it as modular as possible.
Railroady diagram of my applications models: https://github.com/jamesconnolly93/investablemvp/blob/master/doc/models_brief.svg
Any help would be hugely appreciated!

Organising code in Specflow steps

I understand from TechTalk that coupling step definitions to features is an anti-pattern.
However I am wondering how to organise my step definitions so that I can easily see in the code which steps go together.
For example, should I create a code file for each feature and a separate file for steps that are shared? Or should I have a code file for a set of features?
I tend to organize step definitions in a couple of ways, depending on how big the step definition files get.
Separate data and UI steps
I have lots of Given something exists in the database steps, so I usually throw these into a file called DataSteps.cs. I also have lots of Then something exists in the database steps, and those go into DataAssertionSteps.cs.
All of my When I fill in some form field steps go in FormSteps.cs. Any time I need Then something in the form is enabled|disabled steps, or to assert that form fields have a certain value, I throw those into FormPresentationSteps.cs.
Separate Steps by Model
Sometimes my step definition files get very large, and I start moving step definitions into files related to a certain model. Say I have a Person model. I might create a PersonSteps.cs file that is split into three regions:
[Binding]
public class PersonSteps
{
#region Given
// Given a person exists with the following attributes:
#endregion
#region When
// When something something about a Person
#endregion
#region Then
// Then a person should exist with the following attributes:
#endregion
}
Basically I start out with these files:
DataSteps.cs - The Givens for setting up test data
DataAssertionSteps.cs - The Thens for asserting that data exists correctly in the database
PresentationAssertionSteps.cs - Just general stuff making sure the UI looks as it should
FormSteps.cs - Steps for manipulating forms
FormPresentationAssertionSteps.cs - Making sure form fields have the proper values, are enabled/disabled correctly. Validation messages show up, etc.
GeneralSteps.cs - A poorly named catch-all for stuff that doesn't fit in the above files. Maybe I should rename it "AndIShallCallItSomethingManagerSteps.cs"?
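As a rough illustration of the first file in that list, DataSteps.cs might start out like this; the step text, table columns and the TestDatabase helper are made up for the example:

using TechTalk.SpecFlow;

[Binding]
public class DataSteps
{
    // Given the following people exist in the database:
    //   | Name | Email           |
    //   | Bob  | bob@example.com |
    [Given(@"the following people exist in the database")]
    public void GivenTheFollowingPeopleExistInTheDatabase(Table table)
    {
        foreach (var row in table.Rows)
        {
            // TestDatabase is a hypothetical helper that seeds test data.
            TestDatabase.InsertPerson(row["Name"], row["Email"]);
        }
    }
}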

Modifying Desire2Learn Groups using the Valence REST API

I'm a bit confused about how we are supposed to update a group using the Valence API.
According to the documentation, "Name, Code & Description" are required for updating, but the fetch group block only returns "GroupId, Name, Description and Enrollments". If the group Code is not returned in the fetch, what value are we supposed to use in the update block if we only want to update the name? Since the description is provided I can just feed that back, but what am I supposed to do about the code ... just lose that data?
Perhaps there is a way to send an update that will update only specific fields in the update block? When I omit fields from the update block I currently receive an error (i.e. in the case where I only want to update the name).
The Code property for Groups is intended to be the "org-defined code" for the group (for a course offering, this is often called the "course code"), the one that might appear in the organization's SIS system, for example.
Because groups in Desire2Learn's Learning Suite are considered "org units", when you create one you need to provide it with an appropriate org-defined code (Code) -- if your organization doesn't use org-defined codes for groups, then you can decide to systematically use some other kind of data by convention (a name, a descriptive string, and so on). You are correct that it's inconvenient for the Fetch form of the GroupData structure not to provide this value for you, but the value will be accessible to callers through the organization structure routes (because the newly created group is just a special kind of org unit).
In Learning Suite v10.2 (LP API v1.3+) and later, you can use a single GET call to fetch back the properties for an org unit. In versions prior to v10.2, you will need to fetch the list of parents for the group to get a parent org unit ID (or, if you already know the org unit ID for the course offering that owns the group, you can use that); then you use that org unit ID to fetch its list of children: your group will be in that list. The OrgUnit and OrgUnitProperties structures both contain the Code property that you need here.
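For illustration, a very rough C# sketch of those lookups, assuming an HttpClient whose BaseAddress points at the Brightspace host and whose requests are already signed with a Valence ID/key pair; the route paths and version segments are from memory and should be checked against the Valence API reference:

using System.Net.Http;
using System.Threading.Tasks;

public class GroupCodeLookup
{
    private readonly HttpClient client;

    public GroupCodeLookup(HttpClient client) => this.client = client;

    // v10.2+ (LP API v1.3 and later): a single GET returns the org unit's
    // properties, including Code.
    public Task<string> FetchOrgUnitJsonAsync(int orgUnitId) =>
        client.GetStringAsync($"/d2l/api/lp/1.3/orgstructure/{orgUnitId}");

    // Older versions: fetch the group's parents to find the owning course
    // offering, then list that org unit's children; the group appears in the
    // children list as an org unit carrying its Code.
    public Task<string> FetchParentsJsonAsync(int groupOrgUnitId) =>
        client.GetStringAsync($"/d2l/api/lp/1.2/orgstructure/{groupOrgUnitId}/parents/");

    public Task<string> FetchChildrenJsonAsync(int parentOrgUnitId) =>
        client.GetStringAsync($"/d2l/api/lp/1.2/orgstructure/{parentOrgUnitId}/children/");
}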

SpecFlow Dependent Features

I have 2 features: 1. User Creation and 2. User Enrollment. Both are separate and have multiple scenarios. The 2nd feature depends on the 1st one, so when I run the 2nd feature directly, how can it check that the 1st feature has already run and the user has been created? I am using a database in which a creation status column (True/False) tells whether the user has been created or not. So I want the 2nd feature, when run on its own, to first run the 1st feature for user creation.
In general, it is considered a very bad practice to have dependencies between tests, and especially between features. Each test/scenario should have its own independent setup.
If your second feature depends on user creation, you could just add another step to your scenarios, e.g. "When such and such user is created".
If all scenarios under one feature share common content, you could move it up under a Background tag. For example:
Feature: User Enrollment

Background:
Given such and such user

Scenario:
When ...
And ...
Then ...

Scenario:
When ...
And ...
Then ...
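A minimal binding for that Background step might look like the sketch below, so that User Enrollment sets up its own user rather than depending on the User Creation feature having run; the UserApi helper and user name are placeholders:

using TechTalk.SpecFlow;

[Binding]
public class UserSetupSteps
{
    [Given(@"such and such user")]
    public void GivenSuchAndSuchUser()
    {
        // Create the user directly (through the API or the database) rather
        // than relying on the User Creation feature having run first.
        // UserApi is a hypothetical helper for this sketch.
        UserApi.CreateUser("such-and-such");
    }
}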
I used reflection:
Find all Types with a DescriptionAttribute (aka Features)
Find their MethodInfos with a TestAttribute and DescriptionAttribute (aka Scenarios)
Store them in a Dictionary
Call them by "Title of the Feature/Title of the Scenario" with Activator.CreateInstance and Invoke
You have to set the (private) field "testRunner" according to your needs, of course.
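A rough sketch of that reflection approach, assuming the NUnit test provider (SpecFlow's generated feature classes then carry a Description attribute with the feature title, and the scenario methods carry Test and Description attributes); the attribute-text lookup, class names and the way the testRunner field is set may need adjusting to your SpecFlow/NUnit versions:

using System;
using System.Collections.Generic;
using System.Reflection;
using NUnit.Framework;
using TechTalk.SpecFlow;

public static class ScenarioRunner
{
    // Maps "Feature Title/Scenario Title" to the scenario method and its class.
    static readonly Dictionary<string, (Type FeatureClass, MethodInfo Scenario)> Index =
        new Dictionary<string, (Type, MethodInfo)>();

    public static void BuildIndex(Assembly testAssembly)
    {
        foreach (var type in testAssembly.GetTypes())
        {
            var featureTitle = DescriptionOf(type.GetCustomAttribute<DescriptionAttribute>());
            if (featureTitle == null) continue;

            foreach (var method in type.GetMethods())
            {
                if (method.GetCustomAttribute<TestAttribute>() == null) continue;

                var scenarioTitle = DescriptionOf(method.GetCustomAttribute<DescriptionAttribute>());
                if (scenarioTitle == null) continue;

                Index[featureTitle + "/" + scenarioTitle] = (type, method);
            }
        }
    }

    public static void Run(string featureSlashScenario, ITestRunner testRunner)
    {
        var (featureClass, scenario) = Index[featureSlashScenario];
        var instance = Activator.CreateInstance(featureClass);

        // The generated feature class keeps its runner in a private "testRunner"
        // field (static or instance, depending on the generator version).
        featureClass
            .GetField("testRunner", BindingFlags.NonPublic | BindingFlags.Static | BindingFlags.Instance)
            ?.SetValue(instance, testRunner);

        scenario.Invoke(instance, null);
    }

    // NUnit's DescriptionAttribute stores its text in the attribute's property bag.
    static string DescriptionOf(DescriptionAttribute attribute) =>
        attribute?.Properties.Get("Description") as string;
}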
