How do you counter the BDD-scripting anti-pattern in SpecFlow?

This is an example of one of our acceptance tests:
Feature: Add an effect to a level
In order to create a configuration
As a user
I want to be able to add effects to a level
Scenario: Add a new valve effect to a level
Given I have created a level named LEVEL123 with description fooDescription
And I am on the configuration page
When I click LEVEL123 in the level tree
And I expand the panel named Editor Panel
And I click the Add Valve Effect button
And the popup named ASRAddVal appears
And I click the Add new button
And I fill in these vertical fields
| field name | value |
| Name | Effect123 |
Then I should see the following texts on the screen
| text |
| Effect added : EFFECT123 |
We feel that this is getting a bit too verbose and we want to hear how you reduce steps in SpecFlow. From what I've read so far, creating specific non-reusable steps is not recommended, so what is considered "best practice" when doing this in SpecFlow?
Update:
What I am trying to say is that I've learned that you should try to create generic steps in order to re-use them across multiple tests. One way to do that is to parameterize your steps, for example: "Given I have created a level named ..", but the parameterization also introduces verbosity. I want to end up with something like Bryan Oakley suggests in his answer, but I just can't see how I can do that without creating steps which are very specific to each test. That again means I'll end up with a lot of steps, which reduces maintainability. It looks like SpecFlow has a way of defining composite steps by creating a class which inherits a base class called "Steps", but this still introduces new steps.
So to summarize: show me a maintainable approach for ending up with something like Bryan Oakley's answer.

I would simplify it to something like this:
Scenario: Add a new valve effect to a level
Given I have created a new level
When I add a new valve effect with the following values
| field name | value |
| Name | Effect123 |
Then I should get an on-screen confirmation that says "Effect added: Effect123"
The way I approached the problem was to imagine that you are completely redesigning the user interface. Would the test still be usable? For example, the test should work even if there is no "Add" button in the redesign, or you no longer use a popup window.
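One way to implement a coarse-grained step like this without losing the existing fine-grained steps (a sketch, not the only option) is a composite binding that inherits SpecFlow's Steps base class, which the question already mentions, and delegates to steps you already have. The class name and the exact wording of the delegated steps below are just illustrative:

using TechTalk.SpecFlow;

[Binding]
public class ValveEffectSteps : Steps // Steps exposes Given()/When()/Then() helpers
{
    [When(@"I add a new valve effect with the following values")]
    public void WhenIAddANewValveEffect(Table values)
    {
        // Delegate to the existing fine-grained steps so they stay reusable;
        // the step texts must match step definitions that already exist in the project.
        When("I expand the panel named Editor Panel");
        When("I click the Add Valve Effect button");
        When("the popup named ASRAddVal appears");
        When("I click the Add new button");
        When("I fill in these vertical fields", values);
    }
}

This keeps the scenario at the level of intent while the UI mechanics live in one place, so a redesign only forces you to change the composite binding, not every feature file.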

You could try wording them generically and use parameters.
Given I have created a new: Level
The ':' is only so you can identify the parameter. This means you would have one generic entry point for a step that needs to create a new something. It's up to the step then to look at the parameter Level and create a new Level.
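A hedged sketch of what that generic entry point could look like as a SpecFlow binding (the regex, method, and creation logic are just illustrative):

using System;
using TechTalk.SpecFlow;

[Binding]
public class CreationSteps
{
    [Given(@"I have created a new: (.*)")]
    public void GivenIHaveCreatedANew(string entityType)
    {
        // Single generic entry point: inspect the parameter and create the matching entity.
        switch (entityType)
        {
            case "Level":
                // e.g. call your test driver / API to create a new level
                break;
            default:
                throw new ArgumentOutOfRangeException(nameof(entityType),
                    "No creation logic registered for: " + entityType);
        }
    }
}

The trade-off is that the switch grows with each new entity type, so it helps to keep the actual creation logic in shared driver classes that the step definition merely calls into.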
Also try to come up with a naming convention everyone can use. It should be easy to discover what steps have already been created so you don't end up with duplicate, near-identical steps.

Can I suggest that maybe the code you are testing should go into unit tests? Maybe what you mean by "test specific" is individual unit tests that are not covered by your acceptance tests.
Just a thought :)


Deletion of module columns/attributes not visible in a given View

I have been given the task of creating a DXL script. First problem is that I have never used DXL before, even though I have many years experience with DOORS itself. I have been surfing the Net to seek guidance on my particular problem. I also have a few specimen DXL scripts for reference.
My new client requires that for each View of a given Module, of which there are many Views, new "reduced" Modules are to be produced reflecting each View.
By "reduced", I mean that these new Modules are to contain nothing that isn't actually needed for that View., i.e. Columns, Attributes etc. These new Modules will only have the single View.
So, the way forward as I see it is to take copies of the single master Module, one for each View; rename those copies to reflect a given master Module/required View; select that required View in the given copy Module; and then delete everything that is not needed by that View, i.e. available Columns, Attributes, etc.
This would be simple if I had the required DXL knowledge, which I am endeavouring to pick up as fast as I can.
If at all possible, this script has to be generic and be able to work upon any of the master Module copies to produce the associated "reduced" Module reflecting a particular View.
The client aims to use the script periodically for View archiving (I know, that's the way they want it).
Clarification
Some clarification of what I believe is required, given the following text from my original question:
If at all possible, this script has to be generic and be able to work upon any of the master Module copies to produce the associated "reduced" Module reflecting a particular View.
So, say there are ten Views of the master Module. Outside of the DXL script, I would copy the master Module ten times, renaming each copy to reflect each of the ten Views. Unless you know different, each of those ten copies will reflect the same “Absolute Number”s as are in the master Module, so no problem there?
So, starting with the first of the copied Modules, each named to reflect the View it will eventually represent, its View would be set, from the ten Views available to it, to the one which matches its title.
The single generic DXL script would then be run against that first copy Module, the aim being to delete everything not actually needed for that view, i.e. Attributes, Columns etc. Would some kind of purging command be required in the script for any aforementioned deleted items?
The single generic DXL script would then delete ALL views from that copy Module. The log that is produced when running the script also needs capturing, but I’m not sure whether this should be done from within the script, if possible or as a separate manual task outside of the script.
The aforementioned (indented) process would then be repeated, using the same generic script, against the remaining nine copied Modules. The intention is to leave us with ten copy Modules, each one reflecting one of the ten possible Views, with each one containing only the Attributes, Columns etc. required for that View.
Creating a mirror of a module with this approach is not so easy IMO. Think e.g. about "Absolute Number". If the original module contains the numbers 15 (level 1), 2000 (level 2), 1 (level 1), you will have to create 2000 objects, purge 1997 of them and move them to the correct place.
There is a "duplicate" tool at https://www.ibm.com/developerworks/community/forums/html/topic?id=43862118-113d-4eac-b3f1-21d3b73959d1 which tries to do this, but as stated there, this script is said not to work correctly in all situations.
So, I would rather use the approach "string clipCopy (Item i); string clipPaste(Folder folderRef)". It should be faster and less error prone. But: all Out-Links will also be copied with this method; you will probably have to delete these after the copy, or else the link target module(s) will have lots of In-Links.
The problem is still not so easy to solve, as every view might have DXL columns that rely on some or other attribute, and it might contain DXL attributes which again might rely on something else. I doubt that there is a way to analyze DXL code "on the fly" and find out which columns may be deleted.
Perhaps a totally different approach would be feasible: open each view and create an export to Excel; this way you will get rid of any dynamic dependencies. Then re-import the Excel sheet into a new DOORS module. You will still have the "Absolute Number" problem, but perhaps you can make a deal that you will have a pseudo attribute "Original Absolute Number" and disregard the "new" "Absolute Number".
Quite a big task for a DXL beginner....
Update: On second thought, perhaps you might want to combine these approaches
agree with your employer that you will use an alternative attribute for Absolute Number
use a loop like Russel suggested; when creating objects, remember that objects might have to be created "below" or "after" their predecessor or sibling
for DXL attributes do not copy the DXL code but the actual current value of the object
for DXL columns create pseudo attributes _ and create a new view that uses these pseudo attributes instead of the original value
Copying the entire module, then deleting everything not in that view, seems worse than just copying the things you need from each particular view.
I would take the following as the outline of your program:
for view in main module do {
    for column in view do {
        Find attribute for each column and store (possibly in a skip list?)
        Store name of column
    }
    create new module
    create needed types / attributes in new module
    create new view in new module
    for object in main module {
        create object in new module
        for attribute in main module {
            check if attribute is in new module {
                copy info from old object to new
            }
        }
    }
}
Each of these "for X in Y" loops should be in the DXL reference manual in some form or another.
If you need more help, let me know!

Localize workflow task title in Alfresco Activiti

I followed this tutorial to create a custom Alfresco Activiti workflow: http://ecmarchitect.com/alfresco-developer-series-tutorials/workflow/tutorial/tutorial.html
I tried to externalize the contained strings by creating .properties files and made them known in the xyz-context.xml. While this is working, I face a problem with changing the title of a workflow task.
I use the following sampleWorkflow.properties file:
sampleWf.task.confirmTask.title=Confirm this, with a title which is different than the task name
sampleWf.task.confirmTask.description=Confirm please
The BPMN snippet for this task is configured like this:
<userTask id="confirmTask" name="Confirm" activiti:assignee="${bpm_assignee.properties.userName}" activiti:formKey="samplewf:customTypeTask"></userTask>
My question is
Why does only the description of the workflow task change, but not the title?
The localization above works when I don't use the task ID but its property, like this:
sampleWf.task.samplewf_customTypeTask.title=This changes the title
If this is the only possibility, I'd need to deploy a lot of custom types just for naming purposes. Can't I reuse types across workflows and just change the title (name) via this configuration?
Please refer to this link in order to get a better idea of how strings can be localized in a workflow in Alfresco:
<workflow_prefix>_<workflow_name>.workflow.[title|description]
<workflow_prefix>_<workflow_name>.node.<node_name>.[title|description]
<workflow_prefix>_<workflow_name>.node.<node_name>.transition.<transition_name>.[title|description]
<workflow_prefix>_<workflow_name>.task.<task_prefix>_<task_name>.[title|description]
where:
<workflow_prefix> is the workflow model namespace prefix
<workflow_name> is the workflow name
<node_name> is the name of a node within the workflow
<transition_name> is the name of a node transition within the workflow
<task_prefix> is the task namespace prefix
<task_name> is the task name
<transition_name> is the workflow transition name
This suggests you should be putting something like:
sampleWf_<workflow-name>.task.sampleWf_confirmTask.title=Confirm this, with a title which is different than the task name
In theory this should give you the possibility of using the same task model in multiple workflows with different localization, but I guess you still have to duplicate your model in order to be able to have multiple localizations in the same workflow!
Update:
Oops! I got tricked by this statement:
This page was last modified on 13 March 2015, at 02:22.
That was a bot marking the page as obsolete!
The page is obviously outdated and is talking about jBPM, not Activiti; hopefully you can still use the same naming conventions!
Otherwise, worst case scenario, you'd have to create new task models that basically just extend your original task model to be able to customize the task title as needed (no need to redefine properties/constraints ...).

Processing multiline events from a text file in Dataflow

I am attempting to build a Dataflow pipeline to process a text file which contains events that span multiple lines. The Dataflow SDK's TextIO class assumes each line is a new event.
My plan is to create a new TextReader and register it with the DataPipelineRunner. This new reader will know how to aggregate the multiple lines into a single line.
I am pretty sure that this approach will work but I am wondering if this is the right way to do it or if there is a simpler solution?
The text I am trying to parse is:
==============> len:45 pktype:4 mtype:2
SYMBOL: USOCSTIA151632.00
OPEN_INT: 212
PR_OPEN_INTEREST: 212
TIME_STAMP: 04/10/2015 06:30:17:420 val:1428661817
The result should be the last 4 lines concatenated together and the first line dropped.
Best regards,
Peter
Note that TextReader is an internal implementation detail class, so subclassing it would be highly discouraged and challenging to do properly.
The recommended way to define a new file-based format like yours is to subclass FileBasedSource using the user-defined source API.
In your case, I would recommend basing your class on the LineIO example from the documentation, and wrapping the LineReader defined there in your own class, which would use LineReader as a helper for reading individual lines, but:
In startReading() it would skip until the line starting with "====>"
In readNextRecord() it would read lines until the next "====>" and bundle them into a single record.
Please make sure to carefully read the documentation to FileBasedSource and FileBasedReader: the parallelization mechanism relies on the consistency properties described there, which your format has to satisfy, for ensuring that records are not duplicated or omitted on the boundaries between adjacent processing shards. XmlSource tests are a good example of how to unit-test these properties.
Please tell us how it goes and report back with any problems or questions - we are very interested in feedback on this API.

How to Create Multi-Valued List in Team Foundation Server/Microsoft Test Manager?

I am working on VSTS 2010 and using Microsoft Test Manager to manage our team's test cases. I just want to add a custom field for Test Categories (with values like BVT, FVT, Regression, etc.) to the existing Test Case template by following the steps here and here. But I am not able to select multiple values. I am able to create a drop-down list where one of these values can be selected, but not more than one. Since a test case can be part of more than one test category, how can I make this possible?
My steps:
1. In Process Editor, Work Item Types, I create a new field called Test Category with Ref Name MyCustomField.
2. In Rules, I selected AllowedValues as BVT, FVT, Regression.
3. After that, in the Layout tab, I created a new control with the name Test Categories.
4. The field name of the control is set to MyCustomField.
When I checked in MTM to create a new test case, the control is showing as a drop-down instead of the one that was shown here.
The control you are trying to use is strictly 1 to 1 (does not allow for multiple values), so there isn't really a simple answer to this question, but here are some options:
1- Create 3 custom fields BVT, FVT, and Regression each with a yes/no input.
2- Follow the steps here: Multi-value list control
3- Upgrade to VS2012 and use tagging.

Examples of getting it wrong first, on purpose

I just caught myself doing something I do a lot, and wanted to generalize it, express it, share it and see who else is following this general practice, to find some other example situations where it might be relevant.
The general practice is getting something wrong first, on purpose, to establish that everything else is right before undertaking the current task.
What I was trying to do, specifically, was to find examples in our code base where the dojo TextArea widget was used. I knew (because I had it in front of me - existence proof) that the TextBox widget was present in at least one file. So I looked first for what I knew was there:
grep -r digit.form.TextBox | grep -v svn
This wasn't right - I had made a common (for me) mistake of leaving off the star, so I fixed that:
grep -r digit.form.TextBox * | grep -v svn
which found no results! Quick comparison with the file I was looking at showed me I had misspelled "dijit":
grep -r dijit.form.TextBox * | grep -v svn
And now I got results. Cool; doing it wrong first on purpose meant my query was correct except for looking for the wrong thing, so now I could construct the right query:
grep -r dijit.form.TextArea * | grep -v svn
and be confident that when it gave me no results, it was because there are no such files, and not because I had malformed the query.
I'll add three other examples as answers; please add any others you're aware of.
TDD
The red-green-refactor cycle of test-driven development may be the archetype of this practice. With red, demonstrate that the functionality doesn't exist; then make it exist and demonstrate that you've done so by witnessing the green bar.
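As a minimal C#/NUnit illustration of the cycle (the Calculator class and names here are hypothetical, not from the original post): write the test first, watch it fail, then make it pass.

using NUnit.Framework;

public class Calculator
{
    // Green: this implementation is written only after the test below has failed once.
    public int Add(int a, int b) { return a + b; }
}

[TestFixture]
public class CalculatorTests
{
    [Test]
    public void Add_ReturnsSumOfOperands()
    {
        // Red: run this while Add is missing or deliberately wrong and watch it fail,
        // which proves the test really exercises the behaviour.
        var calculator = new Calculator();

        // Green: once Add is implemented this passes; refactor with the test as a safety net.
        Assert.AreEqual(5, calculator.Add(2, 3));
    }
}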
http://support.microsoft.com/kb/275085
This VBA routine turns off the "subdatasheets" property for every table in your MS Access database. The user is instructed to make sure error-handling is set to "Break only on unhandled errors." The routine identifies tables needing the fix by the error that is thrown. I'm not sure this precisely fits your question, but it's always interesting to me that the error is being used in a non-error way.
Here's an example from VBA:
I also use camel case when I Dim my variables. ThisIsAnExampleOfCamelCase. As soon as I exit the VBA code line, if Access doesn't change the lower-case variable to camel case, then I know I've got a typo. [OR, Option Explicit isn't set, which is the post topic.]
I also use this trick, several times an hour at least.
arrange - assert - act - assert
I sometimes like, in my tests, to add a counter-assertion before the action to show that the action is actually responsible for producing the desired outcome demonstrated by the concluding assertion.
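A small C#/NUnit sketch of that shape (the Cart class and member names are hypothetical): the counter-assertion proves the outcome was not already true before the act step.

using System.Collections.Generic;
using NUnit.Framework;

public class Cart
{
    private readonly List<string> items = new List<string>();
    public int ItemCount { get { return items.Count; } }
    public void AddItem(string name) { items.Add(name); }
}

[TestFixture]
public class CartTests
{
    [Test]
    public void AddItem_IncreasesItemCount()
    {
        // Arrange
        var cart = new Cart();

        // Assert (counter-assertion): the desired outcome does not hold yet.
        Assert.AreEqual(0, cart.ItemCount);

        // Act
        cart.AddItem("book");

        // Assert: the outcome can now only be due to the action above.
        Assert.AreEqual(1, cart.ItemCount);
    }
}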
When in doubt of my spelling, and of my editor's spell-checking
We use many editors. Many of them highlight misspelled words as I type them - some do not. I rely on automatic spell checking, but I can't always remember whether the editor of the moment has that feature. So I'll enter, say, "circuitx" and hit space. If it highlights, I'll back up over the space and the "x" and type another space - and learn that I spelled circuit correctly - but if it doesn't, I'll copy the word and paste it into a known spell-checker to see whether I did.
I'm not sure it's the best way to act, as it does not prevent you from misspelling the final command, for example typing "TestArea" or something like that instead of "TextArea" (your fingers just have to slip a little for such a mistake).
IMHO the best way is to run your "final" command, but on two sample files first: one containing the requested text, and another that doesn't.
In other words, instead of running a "similar" command, run the real one, but over "similar" data.
(Not sure if this would be a good idea to try for real!)
For example, you might give the system to the users for testing and tell them the password to get started is "Apple".
You know the users are fully up and ready to test (everything is installed and connections to databases working) when they contact you and say the password doesn't work (it's actually "Orange").
