I want to test a service interface using SpecFlow.
The service is designed to run tests against other services. What I want to do is send a few request messages, get a response back, and validate it.
Let's say I have this data that I want to test using the service contracts below.
var testData = new TestData
{
    Definition = new TestDefinition
    {
        Steps = new List<TestStep>
        {
            new TestStep { StepId = "1", Actions = new List<TestAction>
                { new TestAction { ActionType = TestActionType.SendRequest,
                    Parameters = new Dictionary<string, string> { { "MessageId", "1" } } } } },
            new TestStep { StepId = "2", Actions = new List<TestAction>
                { new TestAction { ActionType = TestActionType.GetResponse,
                    Parameters = new Dictionary<string, string> { { "MessageId", "2" }, { "Wait", "30000" } } } } }
        }
    },
    Requests = new List<Request>
    {
        new Request { MessageId = "1", Content = new List<Element>
            { new Element { Key = "TransactionID.Value", Value = "XX0001" } } }
    },
    Response = new List<ExpectedResponse>
    {
        new ExpectedResponse { MessageId = "2", ElementValidations = new List<ValidationRule>
            { new ValidationRule { Element = new Element { Key = "TransactionID.Value", Value = "XXX0006" },
                Description = "Failed to match [Transaction ID] value" } } }
    }
};
Here are the DataContracts:
[DataContract]
public class TestData
{
    [DataMember]
    public TestDefinition Definition { get; set; }
    [DataMember]
    public List<Request> Requests { get; set; }
    [DataMember]
    public List<ExpectedResponse> Response { get; set; }
}

[DataContract]
public class TestDefinition
{
    [DataMember]
    public string TestId { get; set; }
    [DataMember]
    public List<TestStep> Steps { get; set; }
}

[DataContract]
public class Request
{
    [DataMember]
    public string TransactionId { get; set; }
    [DataMember]
    public string TransactionType { get; set; }
    [DataMember]
    public List<Element> Content { get; set; }
}

[DataContract]
public class TestStep
{
    public TestStep()
    {
        Timeout = 60000;
    }
    [DataMember]
    public string StepId { get; set; }  // referenced by the test data above
    [DataMember]
    public int Timeout { get; set; }    // set by the constructor; was missing
    [DataMember]
    public ICollection<TestAction> Actions { get; set; }
}
I'm using SpecFlow with its specific keywords: Scenario, Given, When, and Then. What can I set as the Scenario, Given, When, and Then?
Here is what I am thinking of doing:
Before scenario [Db set up]
Given ["Id is found on (.*) table"]
When [Prepare data to send in a table] Ex. The request data above
Then [validate the incoming response by looping]
Can someone help me design a good way to approach this?
Thank you.
For a definition of the Givens, Whens and Thens, I'd suggest you start by reading through http://dannorth.net/introducing-bdd
In general,
Given
Givens are the things that you can guarantee. In practice this often means that Givens are where you create instances and inject values to get things into the correct state. If you are using mocking, your mocks are definitely created here too.
When
These are the steps that make your test perform the actions that move you from the Given state to the state that you expect to be in after a successful test.
Then
This is when you check that the code has met your expectations.
And finally,
Scenario
This is where we build up our Givens, Whens and Thens into a cohesive test.
And now for your example
Sorry if this is unclear, but I don't think I have enough detail to really appreciate how you want this design to work.
1.Before scenario [Db set up]
You suggest that in your BeforeScenario you might set up your database. Well, that's one way. Personally I would not use a database inside a test framework; I'm pretty sure that MS/Oracle etc. don't need us to test that the db stores and retrieves data, and you will find that you have problems with multiple tests running in parallel, with test execution order, and with resetting the database back to a known state. Mock it instead.
On this note, you are also defining WCF contracts here, and once again we can trust WCF to get us a proxy and use a channel to communicate with the other end. So during most of your testing you don't have to set up all of that complex architecture; instead, just call an instance of the objects directly. Even if we are using SpecFlow we can still create different levels of test, from single unit tests (just one class, often run in parallel via nUnit/MSTest), up through collaborations of classes (which is what most features exercise after all), all the way to a final integration-level shakedown. Just don't expect the integration-level tests to pass every time if you are creating a whole WCF stack (as you end up with port overlaps etc.).
2.Given ["Id is found on (.*) table"] - At this point we shouldn't be testing anything. You should be putting the system into the state it needs to be in to start the test. Since you haven't defined anything about the database, it's also a really poor implementation detail to add to your scenarios.
Instead I would suggest that this is the point when the server should be put into the state described in your first code block in the question above. You might say Given the standard starting state, or something more meaningful such as Given that we can handle XX0001 requests; inside the [Binding] is where you will set this up.
3.When [Prepare data to send in a table] Ex. The request data above
Probably not. Again, you haven't defined the contract that has the method to execute the test (although you have alluded to it with validate the incoming response by looping). If you have an asynchronous design (i.e. send the TestDefinition to be tested, check a flag to see if the test has finished, and when it has, pull back the result), then the When is everything from sending the data in until you have a result; i.e. if you have to loop (there are better ways), do it here too.
4.Then [validate the incoming response by looping]
Yes (apart from the looping). Here we have a result and can now verify all of its attributes/properties to make sure it is what we expected.
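To make that concrete, here is a rough sketch of how the scenario and its bindings might look. The step wording, the TestService/RunTest names, the TestDataFactory helper and the TestResult type are my assumptions rather than anything defined by your contracts, and Assert is whichever unit test framework you run SpecFlow with (NUnit/MSTest):

// Feature file:
// Scenario: A valid request produces the expected response
//     Given the service can handle XX0001 requests
//     When I submit the test definition and wait for the result
//     Then every validation rule in the expected response passes

[Binding]
public class TestServiceSteps
{
    private TestData _testData;
    private TestResult _result; // hypothetical result type returned by the service

    [Given(@"the service can handle (.*) requests")]
    public void GivenTheServiceCanHandleRequests(string transactionId)
    {
        // Build the TestData object from the question's first code block,
        // substituting transactionId into the request's element value.
        _testData = TestDataFactory.Create(transactionId); // hypothetical factory
    }

    [When(@"I submit the test definition and wait for the result")]
    public void WhenISubmitTheTestDefinition()
    {
        var service = new TestService(); // a direct instance; no WCF stack needed here
        _result = service.RunTest(_testData); // if the design is asynchronous, poll for completion inside this step
    }

    [Then(@"every validation rule in the expected response passes")]
    public void ThenEveryValidationRulePasses()
    {
        Assert.IsTrue(_result.Passed, _result.FailureDescription);
    }
}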
If this is your first SpecFlow test suite, I would recommend that you start a little more simply. Pick just one aspect of the whole process and define your scenarios for that. Build up towards the more complicated example as you get more competent with it.
Don't forget that BDD is a process that helps you discover the design and architecture that fits your needs exactly. You might find that you don't need quite such an abstract framework that can handle every case just yet, but at least when you do make the switch over, you will still have all the tests to prove that the new framework works for the existing code as well.
Good luck.
Related
I have a user class in my framework, and I want to initialize it the first time the user logs in.
public class UserClass
{
    public void Initial(string userId, string userName)
    {
        UserId = userId;
        UserName = userName;
    }
    public string UserId { get; private set; }
    public string UserName { get; private set; }
}
I want the lifetime of this class to depend on
HttpContext.Current.Request.Cookies[FormsAuthentication.FormsCookieName]
I'm not sure if your Initial method is meant to be a constructor for UserClass or an init function. You might approach the solution differently depending on that. In either case, there are three ways I'd consider approaching this:
Explicitly (constructor)
Build a wrapper service exposing the values from cookies and make your UserClass consume that. It's the simplest, least magical option that will be easy for anyone on your team to grasp.
DynamicParameters (constructor)
Use the DynamicParameters feature to pass the cookie values through to the resolution pipeline. It ties you to Windsor and may not be super-obvious to every person on your team, so you need to consider that.
OnCreate (init)
Use the OnCreate feature to initialise the object post-construction. Pretty much the same idea as option 2, but applied to an already-constructed object. This can work with either the explicit or the implicit approach (that is, 1 or 2 from above).
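For illustration, here are rough sketches of options 2 and 3 using Windsor's fluent registration API. IUserCookieReader is a hypothetical wrapper (option 1) around the forms-authentication cookie, container is your IWindsorContainer, and option 2 assumes Initial becomes a constructor taking userId and userName:

// Option 1's wrapper - a small service that reads the forms-auth cookie.
public interface IUserCookieReader
{
    string GetUserId();
    string GetUserName();
}

// Option 2: supply the cookie values as dynamic constructor parameters.
container.Register(
    Component.For<UserClass>()
        .LifestylePerWebRequest()
        .DynamicParameters((kernel, parameters) =>
        {
            var reader = kernel.Resolve<IUserCookieReader>();
            parameters["userId"] = reader.GetUserId();
            parameters["userName"] = reader.GetUserName();
        }));

// Option 3: keep Initial as an init method and call it post-construction.
container.Register(
    Component.For<UserClass>()
        .LifestylePerWebRequest()
        .OnCreate((kernel, user) =>
        {
            var reader = kernel.Resolve<IUserCookieReader>();
            user.Initial(reader.GetUserId(), reader.GetUserName());
        }));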
Like with everything, it's all a trade-off between what is technically possible and what makes sense for your code architecture and team.
I have a problem with an OData controller that is a little unusual compared to the others I have. It is the first one working completely from memory - no database involved.
The returned entity is:
public class TrdRun {
    [Key]
    public Guid Identity { get; set; }
    public TrdTrade[] Trades { get; set; }
}
TrdTrade is also an entity set (which, if queried, goes against a database). But in this particular case I want to return all the trades associated as active with a run, and I can do so WITHOUT going to the database.
My problem? The following code:
[ODataRoute]
public IEnumerable<Reflexo.Api.TrdRun> Get(ODataQueryOptions options) {
    var instances = Repository.TrdInstance.AsEnumerable();
    var runs = new List<Reflexo.Api.TrdRun>();
    foreach (var instance in instances) {
        runs.Add(Get(instance.Identifier));
    }
    return runs;
}
correctly configures runs to have the trades initialized - but Web API decides to swallow them.
What is a way to configure it to return the data "as given" without further filtering? I know about the AutoExpandAttribute (which I would love to avoid - I do not want the API classes marked with OData attributes), but I have not enabled Query, so I would expect the return data to come back as I set it up.
The value of the Trades property is not being serialized because the default behavior of ODataMediaTypeFormatter is to not follow navigation properties, regardless of what is in memory. You could override this behavior by using $expand in the query string of the request, or AutoExpandAttribute on the Trades property in the class definition, but both approaches require decorating your controller method with EnableQueryAttribute.
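If you do enable querying, the attribute-based route would look roughly like this (a sketch only; GetRunsFromMemory is a hypothetical stand-in for your repository code, and EnableQueryAttribute lives in System.Web.OData for Web API 2.2 / OData v4):

public class TrdRunController : ODataController
{
    // [EnableQuery] is what lets the formatter honor $expand=Trades
    // (or an [AutoExpand] attribute on the Trades property).
    [EnableQuery]
    public IQueryable<TrdRun> Get()
    {
        return GetRunsFromMemory().AsQueryable(); // hypothetical in-memory source
    }
}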
If you don't want to do any of that, you can still programmatically specify auto-expansion of Trades in your service configuration as follows:
// Let builder be an instance of ODataModelBuilder or a derived class.
builder.EntityType<TrdRun>().CollectionProperty(r => r.Trades).AutoExpand = true;
Minor issue: With the programmatic approach, if the client requests full metadata (e.g., odata.metadata=full in the Accept header), the OData serializer will not include full metadata in the auto-expanded objects.
I have a distributed system where users can make changes into one single database. To illustrate the problem, let's assume we have the following entities:
public class Product {
    public int Id { get; set; }
    public List<ProductOwner> ProductOwners { get; set; }
}
public class ProductOwner {
    public int ProductId { get; set; }
    [ForeignKey("ProductId")]
    [InverseProperty("ProductOwners")]
    public Product Product { get; set; }
    public int OwnerId { get; set; }
    [ForeignKey("OwnerId")]
    public Owner Owner { get; set; }
}
public class Owner {
    public int Id { get; set; }
}
Let's also assume we have two users, UserOne and UserTwo connected to the system.
UserOne adds Product1 and assigns Owner1 as an owner. As a result, a new ProductOwner1 is created with key=[Product1.Id, Owner1.Id]
UserTwo does the same operation, and another instance ProductOwner2 with key=[Product1.Id, Owner1.Id] is created. This results in an EF exception on the server side, which is expected, as a row with key=[Product1.Id, Owner1.Id] already exists in the database.
Question
The issue above can be partly resolved by having some sort of real-time data refresh on both UserOne's and UserTwo's machines (I am already doing this) and running a validation task on the server to ignore, and not save, entities that are already in the DB.
The remaining issue is how to tell Breeze on UserTwo's machine to mark ProductOwner2 as saved and change its state from Added to Unchanged.
I think this is an excellent question, and it has been raised enough that I wanted to chime in on how I would approach the above scenario, in hopes others can find a good way to accomplish this from a Breeze.js perspective as well. This answer doesn't really address server logic, so it is incomplete at best.
Step 1 - Open a web socket
First and foremost we need some way to tell the other connected clients that there has been a change. SignalR is a great way to do this if you are using the ASP.NET MVC stack, and there are a bunch of other tools as well.
The point is that we don't need a heavyweight way of passing data down and forcing it into the client's cache; we just need a lightweight way to tell the client that some information has changed, so that it can decide whether to refresh. My recommendation in this area would be to use a payload that tells the client either which entity type and id changed, or which collection of entities to refresh. Two examples of a JSON payload that would work well here -
{
    "entityChanges": [
        { "id": "123", "type": "product", "new": false },
        { "id": "234", "type": "product", "new": true }
    ],
    "collectionChanges": [
        { "type": "productOwners" }
    ]
}
In this scenario we are simply telling the client that the products with Ids of 123 and 234 have changed, and that 234 happens to be a new entity. We aren't pushing any data about what properties have changed to the client as that is their responsibility to decide whether to refresh or requery for data. There is also the possibility of telling the client to refresh a whole collection like in the second array but I will focus on the first example.
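To tie this to SignalR concretely, a minimal server-side broadcast might look like the sketch below. EntityChangesHub, ChangeBroadcaster and the "entityChanges" client handler name are all my own hypothetical names, and this assumes SignalR 2.x:

using Microsoft.AspNet.SignalR;

// A hub with no methods; clients connect to it purely to receive change notices.
public class EntityChangesHub : Hub { }

public static class ChangeBroadcaster
{
    // Call this from the server-side save pipeline after a successful commit.
    public static void NotifyClients(string payloadJson)
    {
        var context = GlobalHost.ConnectionManager.GetHubContext<EntityChangesHub>();
        // Invokes the client-side handler registered as "entityChanges",
        // passing the JSON payload shown above.
        context.Clients.All.entityChanges(payloadJson);
    }
}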
Step 2 - Handle the changes
Ok, we got a payload from our web socket that we need to pass to some analyzer to decide whether to requery. My recommendation here is to check whether the entity exists in the cache and, if so, refresh it. If a flag comes down in the JSON saying it is a new entity, we probably also need to query for it. Here is some basic logic -
function checkForChanges(payload) {
    var parsedJson = JSON.parse(payload);
    $.each(parsedJson.entityChanges, function (index, item) {
        // If it is a new entity,
        if (item.new === true) {
            // go get it from the database.
            manager.fetchEntityByKey(item.type, item.id)
                .then(fetchSucceeded).fail(fetchFailed);
        } else {
            // Otherwise check the local cache first,
            var localEntity = manager.getEntityByKey(item.type, item.id);
            // and if we have a local copy already,
            if (localEntity) {
                // go refresh it from the database.
                manager.fetchEntityByKey(item.type, item.id)
                    .then(fetchSucceeded).fail(fetchFailed);
            }
        }
    });
}
Now there is probably some additional logic in your application that needs to be handled, but in a nutshell we are -
Opening up a lightweight connection to the client to listen for changes only
Creating a handler for when those changes occur
Applying some logic on how to query for or refresh the data
Some considerations here: you may want to use different merge strategies depending on various conditions. For instance, if the entity already has changes you may want to preserve them, whereas if it is an entity that is always in a state of flux you may want to overwrite changes.
http://www.breezejs.com/sites/all/apidocs/classes/MergeStrategy.html
Hope this provides some insight, and if it doesn't answer your question directly I apologize for crowding up the answers : )
Would it be possible to catch the Entity Framework unique key constraint error on the Breeze client and recover from it? The client could react by creating a new entity manager (using the createEmptyCopy method), loading the relevant ProductOwner records, and using them to determine which ProductOwner records in the original entity manager need to be set to "unchanged" via the entity's entityAspect.setUnchanged method. Once this synchronization is done, the save can be retried.
In other words, the client is optimistic that the save will succeed but can recover if necessary. The server remains oblivious to the potential race condition and has no custom code.
A brute-force approach; apologies if I'm stating the obvious.
I'm following Steve Sanderson's example from this ASP.NET MVC book on creating a model by hand instead of using diagramming tools to do it for me. So in my model namespace I place a class called MySystemModel with something like the following in it
[Table(Name="tblCC_Business")]
public class Business
{
    [Column(IsPrimaryKey=true, IsDbGenerated=false)]
    public string BusinessID { get; set; }

    // this is done because the Business column and the Business class have interfering names
    [Column(Name="Business")]
    public string BusinessCol { get; set; }
}
This part of it is all fine. The problem, however, is returning multiple result sets from a stored procedure while mixing and matching SQL with LINQ modelling. We do this because the LINQ to SQL translation is too slow for some of our queries (there's really no point arguing this point here, it's a business requirement). So basically I use actual SQL statements along with my LINQ models in my "repository" like so:
public IEnumerable<MyType> ListData(int? arg)
{
    string query = "SELECT * FROM MyTable WHERE argument = {0}";
    return _dc.ExecuteQuery<MyType>(query, arg);
    // _dc.GetTable<MyType>(); <-- this is another way of getting all data out quickly
}
Now the problem I'm having is how to return multiple result sets as I'm not extending DataContext, like so:
public ContractsControlRepository()
{
    _dc = new DataContext(ConfigurationManager.ConnectionStrings["MyConnectionString"].ToString());
}
This link describes how multiple result sets are returned from stored procedures.
[Function(Name="dbo.VariableResultShapes")]
[ResultType(typeof(VariableResultShapesResult1))]
[ResultType(typeof(VariableResultShapesResult2))]
public IMultipleResults VariableResultShapes([Parameter(DbType="Int")] System.Nullable<int> shape)
{
    IExecuteResult result = this.ExecuteMethodCall(this, ((MethodInfo)(MethodInfo.GetCurrentMethod())), shape);
    return ((IMultipleResults)(result.ReturnValue));
}
So how do I turn this into something that can be used by my repository? I just need to be able to return multiple result sets from a repository which contains a DataContext but doesn't extend it. If you copied and pasted the previous extract into a repository like mine, it would just complain that ExecuteMethodCall isn't available - that method is only available if you extend DataContext.
Resources
Guy Berstein's Blog
Every time I ask a question that has been hindering me for days on end, I end up finding the answer within minutes. Anyway, the answer to this issue is that you have to extend DataContext in your repository. If, like me, you're worried about having to specify the connection string in every single controller, you can change the constructor in the repository class to something like this:
public ContractsControlRepository()
: base(ConfigurationManager.ConnectionStrings["AccountsConnectionString"].ToString()) { }
This way when you instantiate your repository the connection is set up for you already, which gives you less to worry about, and actually centralizes specifying the connection string. Extending DataContext also means you have access to all of the protected methods such as ExecuteMethodCall used for calling stored procedures and bringing back, if you will, multiple result sets.
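Putting it together, the repository might look roughly like this (a sketch; it assumes the VariableResultShapes* types from the extract above and requires System.Data.Linq, System.Data.Linq.Mapping, System.Reflection and System.Configuration):

public class ContractsControlRepository : DataContext
{
    public ContractsControlRepository()
        : base(ConfigurationManager.ConnectionStrings["AccountsConnectionString"].ToString()) { }

    [Function(Name = "dbo.VariableResultShapes")]
    [ResultType(typeof(VariableResultShapesResult1))]
    [ResultType(typeof(VariableResultShapesResult2))]
    public IMultipleResults VariableResultShapes([Parameter(DbType = "Int")] int? shape)
    {
        // ExecuteMethodCall is protected, which is why the repository must extend DataContext.
        IExecuteResult result = this.ExecuteMethodCall(this, (MethodInfo)MethodInfo.GetCurrentMethod(), shape);
        return (IMultipleResults)result.ReturnValue;
    }
}

// Usage - each shape is read out of the combined result set in order:
// var repository = new ContractsControlRepository();
// var results = repository.VariableResultShapes(1);
// var shapes = results.GetResult<VariableResultShapesResult1>().ToList();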
For a given report, the user will want multiple filtering options. This isn't bad when the options are enumerations and other 'static' data types, but things can get silly fast when you need a select list populated from fields stored in a table on the backend.
How do you handle this scenario? I find myself constantly reshaping the view data to accommodate the additional filter fields, but it really is starting to be a bit much, tracking not only the selected options but also the options themselves...
Is there not a better way?
I’m currently building out a new reporting section for one of our products at work and am dealing with this same issue. The solution I’ve come up with so far (it hasn’t been implemented yet, so this is still a work in progress) is along these lines.
There will be a class that will represent a report filter which will contain some basic info such as the label text and a list of option values.
public enum DisplayStyle
{
    DropDown,
    ListBox,
    RadioList,
    CheckList,
    TextBox
}

public class FilterOption
{
    public string Name { get; set; }
    public string Value { get; set; }
    public bool Selected { get; set; }
}

public class ReportFilter
{
    public string Title { get; set; }
    public DisplayStyle Style { get; set; }
    public List<FilterOption> Options { get; set; }
}
And then my model will contain a list of these option classes, generated based on each report’s needs. I also have a base report class that each report will inherit from, so that I can handle building out the option lists on a per-report basis and use one view to handle them all.
public class ReportModel
{
    public string Name { get; set; }
    public List<ReportFilter> Filters { get; set; }
    public DateTime StartDate { get; set; }
    public DateTime EndDate { get; set; }
}
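To illustrate the per-report setup described above, a concrete report might build its filters like the sketch below. SalesReport, ReportBase and the option values are hypothetical names; in practice the options would be loaded from the database:

public class SalesReport : ReportBase // ReportBase is the base report class mentioned above
{
    public override ReportModel BuildModel()
    {
        return new ReportModel
        {
            Name = "Sales",
            StartDate = DateTime.Today.AddDays(-30),
            EndDate = DateTime.Today,
            Filters = new List<ReportFilter>
            {
                new ReportFilter
                {
                    Title = "Customer",
                    Style = DisplayStyle.DropDown,
                    // These options would really come from a lookup table.
                    Options = new List<FilterOption>
                    {
                        new FilterOption { Name = "Acme", Value = "1" },
                        new FilterOption { Name = "Contoso", Value = "4" }
                    }
                }
            }
        };
    }
}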
Then inside my view(s) I’ll have some helper methods that will take in those option classes and build out the actual controls for me.
public static string ReportFilter(this HtmlHelper htmlHelper, DisplayStyle displayStyle, FilterOption filterOption)
{
    switch (displayStyle)
    {
        case DisplayStyle.TextBox:
            return string.Format("<input type=\"text\"{0}>", filterOption.Selected ? (" value=\"" + filterOption.Value + "\"") : string.Empty);
        ...
    }
}
My route would look like this
Reports/{reportID}/start/{startDate}/end/{endDate}/{*pathInfo}
All reports have a start and end date, plus optional filters. The catch-all parameter will hold lists of filter values in the form “Customer/1,4,7/Program/45,783”, i.e. key/value pairs in list form. Then when the controller loads, it parses those values out into something more meaningful.
public static Dictionary<string, string> RouteParams(string pathInfo)
{
    if (string.IsNullOrEmpty(pathInfo))
    {
        return new Dictionary<string, string>();
    }
    var values = new Dictionary<string, string>();
    // Split "Customer/1,4,7/Program/45,783" into alternating key/value segments.
    var segments = pathInfo.Split(new[] { '/' }, StringSplitOptions.RemoveEmptyEntries);
    for (int i = 0; i + 1 < segments.Length; i += 2)
    {
        values[segments[i]] = segments[i + 1];
    }
    return values;
}
Then it will pass them off to the report class and validate them to make sure they’re correct for that report. When the options are loaded for that report, anything that’s been set in the URL will be marked Selected in the FilterOption class so its state can be maintained. Then the filter list and other report data will be added to the model.
For my setup some filters will change when another filter’s selection changes, so there will be some AJAX in here to post the data and get the updated filter options. The drilldown will work sort of like the search options at Amazon or Newegg when you narrow your search criteria.
I hope that all makes sense to someone beside me. And if anyone has some input on improving it I’d be happy to hear it.
You could retrieve the data asynchronously on the screen using jQuery and JsonResults from your MVC application; this is how we populate all of our lists and searches in our applications. I have an example of how it is done here.
This way the view data is loaded on demand: if they don't use the extra filters then they don't have to fetch the view data, and if one selection relates to another then it's clear which set of data you need to retrieve.
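For example, a minimal action serving such a list might look like this (a sketch; GetFilterOptions, filterName and _repository are hypothetical names, not from the original post):

// Hypothetical MVC controller action the jQuery call would request.
public JsonResult GetFilterOptions(string filterName)
{
    // _repository stands in for whatever data access you use for lookup tables.
    var options = _repository.GetOptions(filterName);
    return Json(options, JsonRequestBehavior.AllowGet);
}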
Another option, though I don't like this one as much and the jQuery solution may not suit you, is to have the model object for your view contain all the view data, so that all you need to do is set the single model object and all the lists are loaded directly and strongly typed. This simplifies the view and the back-end code because it makes clear that the only thing this view needs is a complete version of the model object.
For example if you had two lists for combo boxes then your model might look like:
public class MyViewModel
{
    public int MyProperty { get; set; }
    public string SomeString { get; set; }
    public List<string> ComboListA { get; set; }
    public List<string> ComboListB { get; set; }
}
Hope that makes sense; if not, please comment and I'll expand on it.
Ad-hoc filtering on reports is indeed a tricky issue, especially when you want to show a custom user interface control based on the data type, do validation, make some filters dependent on one another and others not, etc.
One thing I think is worth considering is the old "build vs buy" issue here. There are specialized tools out there for ad-hoc reporting that provide a UI for ad-hoc filters and help with this, such as the usual suspects Crystal Reports, Microsoft's Reporting Services, or our product ActiveReports Server. In ActiveReports Server we support cascading prompts (where the available values in prompts depend on one another) and make it easy for anyone, even non-technical business users, to modify the prompts (assuming they have permissions, obviously). More information about using prompts in ActiveReports Server is here. ActiveReports Server is also all managed .NET code, and provides ASP.NET controls and web services that allow you to integrate it into your web apps.
Scott Willeke
Product Manager - ActiveReports Server
GrapeCity inc.