I have several SWRL rules that depend on each other's results.
Currently, I can run the rules one by one through the SWRL/SQWRL tabs.
Is there a way to run these rules in succession in such a way that I do not have to run them one by one?
I'm developing an application in Rails with Cucumber.
The application includes a workflow that has multiple steps.
Some steps are:
A user imports files (3 different files),
Another user makes some checks on the data that was imported,
Another user inputs some parameters,
Another user applies the parameters to the data that was imported,
etc.
The steps must be executed in the correct order, and it is necessary to run all the previous steps in order to execute each one; for example, to apply the parameters it is necessary to have the data imported and the parameters defined.
My problem is how to build Cucumber scenarios/features in this situation.
I know that a scenario is not supposed to call all the previous scenarios. But the only other idea I have is to create one very long scenario performing all these steps, and that doesn't make sense because it would be a scenario with more than two hundred steps.
Any thoughts on a pragmatic way of implementing Cucumber in this kind of situation?
Many thanks.
It sounds as if you have to perform everything every time.
Will every usage of your system include importing three files? Are there any cases where the user may only need to import two files? If there will always be three files imported, then you might abstract that step as:

    Given the files are imported

Things that always have to be done can be combined into some generic setup. As the setup never changes, the details may not need to be mentioned explicitly.
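In Cucumber, this kind of generic setup maps naturally onto a Background section, which runs before every scenario in the feature. A minimal sketch, assuming hypothetical step names for the import/check/parameter workflow described above:

    Feature: Apply parameters to imported data

      Background:
        Given the files are imported
        And the imported data has been checked

      Scenario: Apply a parameter to the imported data
        Given the parameter "threshold" is set to "10"
        When the user applies the parameters
        Then the processed data is available

The Background keeps the unavoidable setup in one place, so each scenario only states what is specific to it.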
My experience, though, is that at the beginning it is hard to separate scenarios, and it is tempting to do too much in a few scenarios with many steps. If you don't see any other way, start there. Then look at each scenario and see whether it can be separated into two independent scenarios. The next step would be to see whether those two new scenarios can each be divided into two smaller, independent scenarios. It often turns out that this is possible.
It is, of course, always possible that Cucumber is not the tool you need and that you would be better off with a unit test framework.
Using Protégé 4, I have created three different ontologies: Process, Tools, Raw Material. Now I want to define the relations between the concepts of these ontologies (for example: a specific task in the Process ontology needs the use of a specific tool). How can I do this using rules?
I have a number of different manual test cases which need to be automated with SpecFlow.
There are multiple test cases and multiple scenarios, so there will be multiple feature files?
We are following a sprint system; each sprint has 100+ test cases which are going to be automated.
What is the best practice for managing the test cases and scenarios using feature files? There's no point in creating the same set of functions every time for different test cases.
You would manage this the same as you would manage any other code files: make changes, and if your changes conflict with others' changes, merge them before you check in.
The best way to avoid merge issues is to try to work in different areas. Create many feature files, as then multiple people can work on different features at one time and you won't have conflicts.
Communication between the testers is also important in avoiding conflicts, and in the case of SpecFlow scenarios it is important in ensuring that you use consistent step names. Checking in often, even after each scenario has been created, will also minimise the number of merge issues.
EDIT
Based on your edited question: in SpecFlow all steps are global, so if Feature1 has a scenario with the step Given a user 'Bob' is logged in and Feature32 also has a scenario with the step Given a user 'Tom' is logged in, then both will match the same parameterised step binding and the same function will be reused.
As long as you write your steps in a consistent manner (i.e. use the same text), you should get excellent reuse of functions across all of your hundreds of features and scenarios.
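For example, these two scenarios from different feature files (the feature and step names here are illustrative) would both bind to a single parameterised step definition such as Given a user '(.*)' is logged in, so the step code is written once:

    # features/Feature1.feature (illustrative)
    Scenario: Bob views his dashboard
      Given a user 'Bob' is logged in
      When he opens the dashboard
      Then he sees his own account summary

    # features/Feature32.feature (illustrative)
    Scenario: Tom views his dashboard
      Given a user 'Tom' is logged in
      When he opens the dashboard
      Then he sees his own account summary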
If your system is basically crunching numbers (i.e. given a set of large inputs, run a process on them, and then assert the outputs), which is the better framework for this?
By 'large inputs', I mean we need to enter data for several different, related entities.
Also, there are several outputs i.e. we don't just get one number at the end.
If you find yourself talking through different examples with people, JBehave is probably pretty good.
If you find yourself making lists of numbers and comparing inputs with outputs, Fitnesse is probably better.
However, if you find yourself talking to other devs and nobody else, use plain old JUnit. The less abstraction you have, the quicker it will be to run and the easier it will be to maintain.
I am using Ruby on Rails 3.2.2 and Cucumber with the cucumber-rails gem. I would like to know which Cucumber tags are commonly used throughout an application, or at least what criteria I should consider to make tags "efficient"/"useful". More generally, I would like to know how, and in which ways, I could or should use Cucumber tags.
Tags are most commonly used to select or exclude certain tests from running. Your particular situation will dictate what 'groups' of tests are useful to run or not run for a particular test run, but some common examples could be (a short sketch follows this list):
@slow - indicates a test that takes a long time to run; you might want to exclude this from most test runs and only run it on an overnight build, so that developers don't have to wait for it every time.
@wip - indicates that the test exercises unfinished functionality, so it would be expected to fail while the feature is in development (and when it's done, the @wip tag would be removed). This has special significance in Cucumber, which will return a non-zero exit code if any @wip tests actually pass.
@release_x, @sprint_y, @version_z etc. - many teams tag each test with the release/sprint/version that contains it, so that they can run a minimal suite of tests during development. Generally the same idea as the @wip tag, except that these tags stay attached to the test, so the team always knows when a particular feature was introduced.
@payments, @search, @seo etc. - basically any logical grouping of tests that isn't already expressed by the organisation of your feature files. Commonly used when a test relates to a cross-cutting concern, or when your project is divided into components along different lines from your feature files.
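As a sketch of how this looks in practice (the tag and scenario names below are illustrative), tags are placed on the line above a feature or scenario and can be combined; you then select or exclude them on the command line, e.g. cucumber --tags @wip, or cucumber --tags ~@slow to exclude slow tests (the exact expression syntax varies between Cucumber versions):

    @payments
    Feature: Checkout

      @slow @release_3
      Scenario: Pay by bank transfer
        Given a pending order
        When the customer pays by bank transfer
        Then the order is marked as paid

      @wip
      Scenario: Pay with store credit
        Given a customer with store credit
        When the customer pays with store credit
        Then the order is marked as paid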
Tags are also used to fire hooks - bits of code which can run before, after, or 'around' tests with particular tags (see the sketch after this list). Some examples of this are:
@javascript - indicates that a test needs JavaScript support, so the hook can switch from an HTTP-only driver to one with JS support. Capybara goes one step further by automatically switching to a driver named after the tag, if one is found (so you could use e.g. @desktop, @mobile, @tablet drivers).
@logged_in - indicates that the test needs to run in the context of a logged-in user; this sometimes makes sense to express with a tag, although a Background section would be more commonly used.
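A minimal sketch of tag-fired hooks in Ruby (the login helper below is hypothetical, and note that cucumber-rails/Capybara already provide the @javascript driver switching for you):

    # features/support/hooks.rb
    # Runs only before scenarios tagged @logged_in.
    Before('@logged_in') do
      # Hypothetical helper - replace with your app's real login logic.
      create_and_sign_in_user
    end

    # Runs only after scenarios tagged @javascript,
    # restoring the default (HTTP-only) driver.
    After('@javascript') do
      Capybara.use_default_driver
    end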
Additionally, tags can be used purely for informational purposes. I've seen teams tag tests with the related issue number, the author, or the responsible developer, amongst other things; many of these can be useful, but many duplicate information which is easily found in source control, so I'd caution against that.