I am writing a feature that will have scenarios with a parameter in common.
The scenario would look something like this:
Given the user is viewing the book <bookIdAddress>
When ...
Then ...
Examples:
| bookIdAddress |
| ... |
| ... |
I will have many scenarios like the one above in my feature, and I want to test this feature with many books.
This same parameter would repeat for all scenarios of the feature. As far as my current knowledge of BDD goes, the only way is to keep putting the same Examples table in every single scenario. I was wondering whether there is an option to write the Examples once for the entire feature, or, if I am going about this completely wrong, what approach I should take instead.
I know I can use the Background keyword to write a setup for the entire feature, but I don't know of a way to define the Examples at the feature level only.
You cannot share example tables in SpecFlow. I tried adding the table to the Background as a way to hack around it, but it didn't work.
One option to consider is having each Scenario Outline read its data from the same Excel file. That way you share a single data source and also keep long tables of data out of the feature file.
http://www.specflow.org/plus/Excel/
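Roughly, the idea with that plugin is to tag each scenario outline and let it pull the example rows from a shared workbook instead of an inline Examples table. A sketch of what that might look like (the workbook name here is made up, and the exact tag syntax is described at the link above):

@source:Books.xlsx
Scenario Outline: User views a book
    Given the user is viewing the book <bookIdAddress>
    When ...
    Then ...

Every outline tagged with the same workbook reads the same bookIdAddress rows, so the data only lives in one place.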
I have a list of IPs that I want to filter out of many queries that I have in Sumo Logic. Is there a way to store that list of IPs somewhere so it can be referenced, instead of copy-pasting it into every query?
For example, in a perfect world it would be nice to define a list of things like:
things=foo,bar,baz
And then in another query reference it:
where mything IN things
Right now I'm just copying and pasting. I think there may be a way to do this by setting up a custom data source and putting the IPs in there, but that seems like a very roundabout way of doing it, and it wouldn't help with reusing parts of a query that aren't data (e.g. reusing statements). Also, their template feature is about parameterizing a single query, not reuse across many queries.
Yes. There's a notion of Lookup Tables in Sumo Logic. Consult:
https://help.sumologic.com/docs/search/lookup-tables/create-lookup-table/
for details.
It lets you store values (either manually once, or on a schedule as the result of a query) with the | save operator.
You can then refer to those values using | lookup, which is conceptually similar to SQL's JOIN.
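As a very rough sketch of the flow (the source categories, field names, and table path below are all made up; see the docs above for the exact lookup-table syntax):

// store the list once, or refresh it from a scheduled search
_sourceCategory=some/source
| count by ip
| save path://"/Library/Users/you@example.com/blocked_ips"

// then join against that same list from any other query
_sourceCategory=other/source
| lookup ip from path://"/Library/Users/you@example.com/blocked_ips" on ip = client_ip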
Disclaimer: I am currently employed by Sumo Logic.
How can I hit multiple APIs like example.com/1000/getUser, example.com/1001/getUser in Gatling? They are GET calls.
Note: the numbers start from a non-zero integer.
Hard to give good advice based on the small amount of information in your question, but I'm guessing that passing the user IDs in with a feeder could be a simple, straightforward solution. It largely depends on how your API works, what kind of tests you're planning, and how many users (I'm assuming the numbers are user IDs) you need to test with.
If you need millions of users, a custom feeder that generates increments would probably be better, but beyond that the strategy would otherwise be the same. I advise you to read up on the feeder documentation for more information, both on usage in general and on how to write custom feeders: https://gatling.io/docs/3.0/session/feeder/
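For instance, a custom feeder that simply counts upward from a starting ID can be as small as this (the starting number and field name are only illustrative):

// endless feeder yielding userid = 1000, 1001, 1002, ...
val useridFeeder: Iterator[Map[String, String]] =
  Iterator.from(1000).map(i => Map("userid" -> i.toString))

You would then pass it to .feed(useridFeeder) instead of the csv feeder used below.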
As an example, if you just need a relatively small number of users, something along these lines could be a simple, straightforward solution:
Make a simple csv file (for example named userid.csv) with all your user IDs and add it to the resources folder:
userid
1000
1001
1002
...
...
The .feed() step adds one value from the csv file to your Gatling user session, which you can then read just as you would any other session value. Each of the ten users injected in this example picks up the next value from the csv file.
import io.gatling.core.Predef._
import io.gatling.http.Predef._

// simulation class name is arbitrary
class GetUserSimulation extends Simulation {

  setUp(
    scenario("ScenarioName")
      .feed(csv("userid.csv"))
      // the userid fed into the session is resolved into the URL for each virtual user
      .exec(http("Name of your request").get("/${userid}/getUser"))
      .inject(atOnceUsers(10))
  ).protocols(http.baseUrl("https://example.com"))
}
What is the best practice for verifying a REST response that contains multiple relevant fields in Cucumber/Gherkin? We are using scenario outlines, so things are parameterized with Examples tables.
Here are some approaches I've considered:
The simplest approach would be to just add each field as a column in the examples table. But this quickly became very unreadable as the examples table overflowed the width of the screen, and we ended up with almost a dozen steps in each scenario of the form: And the <fieldName> should be <value>. This is very verbose and obviously departs from the spirit of Gherkin being intended to resemble natural language.
Next I considered putting the response body as a whole into a file in JSON format and verifying it in a single step like And the response matches <file containing expected response> (the examples table would just contain the path to the file). However, this makes it very opaque exactly what I am verifying in the test, as the actual fields and data values are hidden away in another file. Furthermore, I've read that test steps should not be concerned with the exact format of the data (JSON, XML, or whatever).
After that, I read an article that uses a vertical table after the step to specify multiple fields, resulting in something like this:
And the response contains:
| field1 | value1 |
| field2 | value2 |
However, I was unsure of how to parameterize this. Individual atomic values go into a column in the examples table, but what about a whole other table? I looked into whether nested tables are supported, but it seems some people consider that unreadable and bad practice as well.
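To make that concrete, what I would ideally like to write is something along these lines, with the expected values drawn from the Examples table (names purely illustrative):

Scenario Outline: Fetch a user
  When I request the user <userId>
  Then the response contains:
    | name  | <expectedName>  |
    | email | <expectedEmail> |

  Examples:
    | userId | expectedName | expectedEmail    |
    | 42     | Jane Doe     | jane@example.com |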
So, what is the general best practice for this scenario? For a parameterized scenario, what approach strikes the best balance between natural-language readability and precisely conveying your expectations?
Currently we are using the graphql/graphql-ruby library. I have written a few queries and mutations as per our requirements.
I have the below use case, and I am not sure how to implement it.
I already have a query/endpoint named allManagers which returns all manager details.
Today I got a requirement to implement another query that returns all the managers filtered by region.
I have 2 options to handle this scenario.
Create an optional argument for region, and inside the query check whether a region was passed and, if so, filter by it.
Use something like https://www.howtographql.com/graphql-ruby/7-filtering/.
Which approach is the correct one?
Looks like you can accomplish this with either approach. #2 looks a bit more complicated, but maybe is more extensible if you end up adding a ton of different types of filters?
Are you going to be asked to select multiple regions? Or negative regions (any region except North America)? Those are the types of questions you want to be thinking about when choosing an approach.
Sounds like a good conversation to have with a coworker :)
I'd probably opt to start with a simple approach and then change it out for a more complex one when the simple solution isn't solving all of my needs any more.
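For reference, a minimal sketch of the simple approach (option 1) in graphql-ruby's class-based API, assuming a Types::ManagerType and a Manager model (both placeholders for whatever you already have), might look roughly like this:

module Types
  class QueryType < Types::BaseObject
    # allManagers(region: "...") with region optional
    field :all_managers, [Types::ManagerType], null: false do
      argument :region, String, required: false
    end

    def all_managers(region: nil)
      scope = Manager.all                       # placeholder for however you load managers
      scope = scope.where(region: region) if region
      scope
    end
  end
end

When the argument starts growing (multiple regions, exclusions, and so on), that is usually the point where moving to a dedicated filtering approach like option 2 pays off.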
I'm a newbie to ChicagoBoss and Erlang in general, so please bear with me.
I have an Options model which represents a number of site configuration settings (think of the available options in WordPress, since it's modeled after that), on which I have to perform CRUD operations.
The model looks like this:
-module(options,
[
Id,
KeyName::string(),
Value::string(),
IsActive::string()
]
).
-compile(export_all).
Each option is prefixed by its category, so general option names look like "general_option_" followed by the specific name.
The views for Options are mostly a list of inputs with each input linked to a specific option, as you might expect.
Since the number and names of the options are not known beforehand (except in the view), I would like to know what approaches there are for dealing with this case, as every example I've seen so far deals with a single item and not a list of them. Please share any advice or constructive criticism you have; it will be very welcome.