openapi, list of strings as query parameter - swagger

I'm defining a query parameter with OpenAPI 3.0.1 as follows:
{
  "name" : "sort",
  "in" : "query",
  "description" : "Sorting criteria. Example: productCode,desc",
  "required" : false,
  "explode" : false,
  "schema" : {
    "type" : "array",
    "items" : {
      "type" : "string"
    }
  }
}
On swagger-ui 3.51.1, if I add two strings
"parameter1,asc"
"parameter2,desc"
they are serialized correctly (as a list of strings with two elements), but if I add only one string
"parameter1,asc"
it gets serialized incorrectly as a list of strings with two elements (parameter1 and asc).
I do not understand why the string is exploded! Any help is greatly appreciated.

In your example, the query parameter has no style defined, so it defaults to style: form. Non-exploded form style treats the comma (,) as the separator between array items. This is ambiguous here because the values of your array items also use a comma as an inner separator.
Possible solutions involve changing your backend code and/or the OpenAPI parameter definition.
Adjust the backend code so that it splits the received sort string on every second comma rather than on every comma (a minimal sketch of this is shown after the object/map example below).
Or, use another serialization method for the sort array, for example:
explode: true to send the exploded array:
?sort=parameter1,asc&sort=parameter2,desc
style: pipeDelimited + explode: false to separate array items using | instead of commas:
?sort=parameter1,asc|parameter2,desc
Or, change sort to be an object/map instead of an array:
{
  "name": "sort",
  "in": "query",
  "description": "Sorting criteria. Example: productCode,desc",
  "required": false,
  "explode": false,
  "schema": {
    "type": "object",
    "additionalProperties": {
      "type": "string",
      "enum": ["asc", "desc"]
    }
  }
}
In this case, your current query string format
?sort=parameter1,asc,parameter2,desc
unambiguously corresponds to:
{
  "parameter1": "asc",
  "parameter2": "desc"
}
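If you go with the first option (splitting on every second comma in the backend), a minimal Java sketch of that logic could look like the following; the class and method names are placeholders and not tied to any particular framework:

import java.util.ArrayList;
import java.util.List;

public class SortParamParser {

    // Splits a non-exploded form-style value such as
    // "parameter1,asc,parameter2,desc" on every second comma,
    // yielding ["parameter1,asc", "parameter2,desc"].
    public static List<String> parseSort(String raw) {
        String[] tokens = raw.split(",");
        List<String> criteria = new ArrayList<>();
        for (int i = 0; i < tokens.length; i += 2) {
            if (i + 1 < tokens.length) {
                criteria.add(tokens[i] + "," + tokens[i + 1]);
            } else {
                criteria.add(tokens[i]); // trailing field without a direction
            }
        }
        return criteria;
    }
}

For example, parseSort("parameter1,asc") returns a single-element list ["parameter1,asc"], while parseSort("parameter1,asc,parameter2,desc") returns two elements.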

Related

Elasticsearch saves document as string of array, not array of strings

I am trying to store an array as a document value.
I succeeded with the "tags" field as below; this document contains an array of strings.
curl -XGET localhost:9200/MY_INDEX/_doc/132328908
#=> {
  "_index":"MY_INDEX",
  "_type":"_doc",
  "_id":"132328908",
  "found":true,
  "_source": {
    "tags": ["food"]
  }
}
However, when I index items in the same way as above,
the document SOMETIMES ends up like this:
curl -XGET localhost:9200/MY_INDEX/_doc/328098989
#=> {
  "_index":"MY_INDEX",
  "_type":"_doc",
  "_id":"328098989",
  "found":true,
  "_source": {
    "tags": "[\"food\"]"
  }
}
This is a string of an array, not the array of strings I expected:
"tags": "[\"food\"]"
It seems that this happens randomly and I cannot predict it.
How could this happen?
Note: I use the elasticsearch-ruby client to index documents.
This is my actual code:
es_client = Elasticsearch::Client.new url: MY_ENDPOINT
es_client.index(
  index: MY_INDEX,
  id: random_id, # defined elsewhere
  body: {
    doc: {
      "tags": ["food"]
    },
  }
)
Thank you in advance.

REST Assured: how can we compare each element in a JSON array to one particular value in Java using Hamcrest matchers, without a foreach loop

REST Assured: how can we compare each element in a JSON array to one particular value in Java using Hamcrest matchers, without a foreach loop?
{
  "id": 52352,
  "name": "Great Apartments",
  "floorplans": [
    {
      "id": 5342622,
      "name": "THE STUDIO",
      "fpCustomAmenities": [
        {
          "displaySequence": 2,
          "amenityPartnerId": "gadasd",
          "display": true,
          "leased": true
        },
        {
          "displaySequence": 13,
          "amenityPartnerId": "sdfsfd",
          "display": true,
          "leased": true
        }
      ]
    },
    {
      "id": 4321020,
      "name": "THE First Bed",
      "fpCustomAmenities": [
        {
          "displaySequence": 4,
          "amenityPartnerId": "gadasd",
          "display": true,
          "leased": true
        },
        {
          "displaySequence": 15,
          "amenityPartnerId": "hsfdsdf",
          "display": true,
          "leased": true
        }
      ]
    }
  ]
}
I want to verify that leased=true for all the leased nodes at all levels in the JSON response.
I have working code:
List<List<Boolean>> displayedvaluesfpStandardAmenities =
    when().get(baseUrl + restUrl).
    then().statusCode(200).log().ifError().
    extract().body().jsonPath().getList("floorplans.fpCustomAmenities.display");
for (List<Boolean> displayedStandardList : displayedvaluesfpStandardAmenities) {
    for (Boolean isDisplayedTrue : displayedStandardList) {
        softAssert.assertTrue(isDisplayedTrue);
    }
}
But I need the code in a simpler form, using either Hamcrest matchers or REST Assured matchers, something like below (which is not working):
when().get(baseUrl + restUrl).
    then().assertThat().body("floorplans.fpCustomAmenities.display", equalTo("true"));
The error I am getting is
java.lang.AssertionError: 1 expectation failed.
JSON path floorplans.fpCustomAmenities.display doesn't match.
Expected: true
Actual: <[[true, true], [true, true]]>
So what I need is for all the 'display' nodes in the JSON response, wherever they are, to be compared with true, so that my test can pass.
I have an alternative solution, as mentioned above, but all I need is a working solution using matchers.
Assuming the fpCustomAmenities arrays are not empty, you can use the following solution:
when().get(baseUrl + restUrl).then()
    .body("floorplans.findAll { it }.fpCustomAmenities" +    // 1st line
          ".findAll { it }.leased.each{ a -> println a }" +  // 2nd line
          ".grep{ it.contains(false) }.size()", equalTo(0)); // 3rd line
Here, the 1st line returns each object in the fpCustomAmenities array.
The 2nd line collects the boolean value of leased from each fpCustomAmenities object into a boolean array ([true, true]).
Each boolean array is printed by .each{ a -> println a }. I added it only to explain the answer; it is not needed for the solution.
The 3rd line checks whether there is a false in any of those boolean arrays: grep() returns only the arrays that contain a false, and then we take the count of the filtered arrays and check whether it is equal to 0.
Check the Groovy documentation for more details.
Or
This solution does not use any matchers, but it works.
String responseBody = when().get(baseUrl + restUrl).
    then().extract().response().getBody().asPrettyString();
Assert.assertFalse(responseBody.contains("\"leased\": false"));
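If the nesting stays at this depth, a shorter matcher-based variant may also work, flattening the nested lists with GPath's flatten() and checking every item with Hamcrest's everyItem. This is a sketch, not verified against this exact response:

import static io.restassured.RestAssured.when;
import static org.hamcrest.Matchers.everyItem;
import static org.hamcrest.Matchers.is;

// flatten() collapses the nested [[true, true], [true, true]] lists into a
// single list, so everyItem() can check each boolean value directly
when().get(baseUrl + restUrl)
    .then().statusCode(200)
    .body("floorplans.fpCustomAmenities.leased.flatten()", everyItem(is(true)));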

RoR - Find if part of a string matches an array of hashes

I know there are a lot of similar questions but I am struggling to find a specific answer.
I have an array of hashes, each with a symbol key and a price value. I am looking to filter the array to include only the hashes whose symbol ends in the letters ETL.
The data looks like:
[
  {
    "symbol": "ABCDEF",
    "price": "4"
  },
  {
    "symbol": "GHIETL",
    "price": "5"
  }
]
You can use something like this:
array.select { |hash| hash[:symbol].end_with? "ETL" }
From the Ruby docs for select:
Returns an array containing all elements of enum for which the given block returns a true value.
end_with? also accepts multiple suffixes if you need to filter by more than one. For example:
array.select { |hash| hash[:symbol].end_with? "ETL", "DEF" }

Swagger query parameter template

I have one query parameter which is a little bit complex, and I have my own syntax for building its value. It combines more than one variable into one complete string value.
Let's suppose the parameter is named index and has a row and a column, which combine into the value 20:30:
index = { row: 20, col:30 }
index2 = { row: 20, col:30, chr: 15 }
Now I want to send it as:
example.com?index=20:30
example.com?index2=20:30:15
Can someone tell me how I can define this in Swagger?
Thank you.
Make your Swagger parameter a string and handle the splitting into multiple variables in your backend code...
I do exactly that here:
http://turoapi.azurewebsites.net/swagger/ui/index#/Echo/Echo_Get
"parameters": [
{
"name": "location",
"in": "query",
"description": "SoFL= 26.16,-80.20",
"required": true,
"type": "string"
},
That location is (Latitude,Longitude) and I split it with a C# TypeConverter
...and the request looks like:
http://turoapi.azurewebsites.net/api/Echo?location=26.16,-80.20
The code for that WebApi is here:
https://github.com/heldersepu/TuroApi
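The linked project does its splitting with a C# TypeConverter; as a rough Java illustration of the same idea (the names here are hypothetical and not taken from that project):

import java.util.LinkedHashMap;
import java.util.Map;

public class IndexParam {

    // Parses a colon-delimited value such as "20:30" or "20:30:15"
    // into named parts (row, col, and optionally chr).
    public static Map<String, Integer> parse(String raw) {
        String[] parts = raw.split(":");
        Map<String, Integer> index = new LinkedHashMap<>();
        index.put("row", Integer.parseInt(parts[0]));
        index.put("col", Integer.parseInt(parts[1]));
        if (parts.length > 2) {
            index.put("chr", Integer.parseInt(parts[2]));
        }
        return index;
    }
}

For example, parse("20:30:15") returns {row=20, col=30, chr=15}.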

Dynamic Schemas and nested Maps in Avro

I'm new to Avro, and am trying to write some code to serialize some nested objects.
The structure of the objects looks like this:
class Parcel {
    String recipe;
    Map<Integer, PluginDump> dumps;
}

class PluginDump {
    byte[] state;
    Map<String, Param> params;
}

class Param {
    Type type; // can be e.g. StringType, BooleanType, etc.
    Object value;
}
So I can't use a static avro schema - each PluginDump will have a different schema depending on the types within it.
I have written some code which can generate a Schema based on an individual PluginDump.
So when serializing a Parcel, how do I 'put' each PluginDump entry?
Here is my code:
Schema parcelSchema = AvroHelper.getSchema(p);
GenericRecord parcelRecord = new GenericData.Record(parcelSchema);
parcelRecord.put("recipe", p.getRecipe().toJson());
for (Map.Entry<Integer, PluginDump> entry : p.getDumps().entrySet()) {
    PluginDump dump = entry.getValue();
    Integer uid = entry.getKey();
    Schema dumpSchema = AvroHelper.getSchema(dump); // will be different for each PluginDump
    parcelRecord.put(????
Any ideas?
I have a feeling my approach is wrong, but I can't find any examples in the documentation of dynamic schema generation or nested maps.
1. When you call GenericRecord parcelRecord = new GenericData.Record(parcelSchema); you have two fields in your record: recipe and dumps, so you can't iterate through the dumps; you must put a prepared map of dumps into the second field of the record, just like you did for recipe: parcelRecord.put("dumps", dumps);. But in this case you'll get a ClassCastException, because PluginDump cannot be cast to org.apache.avro.generic.IndexedRecord, so you need to put a Map of GenericRecords into parcelRecord. You also need this for Map<String, Param> params, because Param cannot be cast to IndexedRecord either. A rough sketch follows.
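This is hypothetical code only; it assumes AvroHelper.getSchema(p) declares dumps as an Avro map whose value schema covers each generated dump schema (for example via a union), since Avro map keys are always strings and all map values share one schema:

import java.nio.ByteBuffer;
import java.util.HashMap;
import java.util.Map;
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;

// build a GenericRecord per PluginDump and put the whole map into "dumps"
Map<String, GenericRecord> dumpRecords = new HashMap<>();
for (Map.Entry<Integer, PluginDump> entry : p.getDumps().entrySet()) {
    PluginDump dump = entry.getValue();
    Schema dumpSchema = AvroHelper.getSchema(dump);
    GenericRecord dumpRecord = new GenericData.Record(dumpSchema);
    dumpRecord.put("state", ByteBuffer.wrap(dump.getState())); // getState() is a hypothetical getter
    // params would need the same conversion: a map of GenericRecords, not Param objects
    dumpRecords.put(entry.getKey().toString(), dumpRecord); // Avro map keys are strings
}
parcelRecord.put("dumps", dumpRecords);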
2. Then, I think it's better to use Lists instead of Maps, because Avro does not work very well with Maps that have different types of keys and values.
3. About the Param class: if you use an auto-generated schema, the Param class will be represented like this:
"type": "record",
"name": "Param",
"fields": [
{
"name": "type",
"type": {
"type": "record",
"name": "Type",
"namespace": "java.lang.reflect",
"fields": []
}
},
{
"name": "value",
"type": {
"type": "record",
"name": "Object",
"namespace": "java.lang",
"fields": []
}
}
]
Since Avro uses java.lang.reflect here, you will lose the type field after deserialization; Avro will not know what type it was.
If you want to generate the Avro schema manually for each Param, taking its type into account, you can do something like this (I used ClassUtils.getClass from Apache commons-lang3, because the standard Class.forName method doesn't always work properly):
public Schema getParamSchema() throws ClassNotFoundException {
    List<Schema.Field> fields = new ArrayList<>();
    fields.add(new Schema.Field("key", Schema.create(Schema.Type.STRING), "Doc: key field", (Object) null));
    Schema.Field f = new Schema.Field("type",
        ReflectData.get().getSchema(ClassUtils.getClass(((Class) this.type).getName())),
        "Doc: type field", (Object) null);
    f.addProp("java-class", ((Class) this.type).getName());
    fields.add(f);
    fields.add(new Schema.Field("value", ReflectData.get().getSchema(value.getClass()), "Doc: value field", (Object) null));
    return Schema.createRecord(((Class) this.type).getName() + "Param", "Doc: param record",
        this.getClass().getPackage().getName(), false, fields);
}
But in this case Avro will throw a ClassCastException, because it can't cast Class to Boolean, Integer, etc. I have always had a lot of problems working with Avro and Java Types and Classes.
So the best advice, I think, is to change your model (Parcel, PluginDump and Param, I mean) to have fewer problems with Avro. For example, you can store the type name as a string and resolve the Type with reflection after deserializing, as sketched below.
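As one illustration of that last suggestion (a sketch only, with hypothetical field names):

import org.apache.commons.lang3.ClassUtils;

// Variant of Param that Avro can handle without reflect-only types:
// the type travels as a class name string and is resolved again after deserialization.
class Param {
    String typeName; // e.g. "java.lang.Boolean"
    String value;    // the value kept in a serializable form, e.g. its string form

    Object typedValue() throws ClassNotFoundException {
        Class<?> type = ClassUtils.getClass(typeName); // same helper the answer uses
        if (type == Boolean.class) return Boolean.valueOf(value);
        if (type == Integer.class) return Integer.valueOf(value);
        return value; // fall back to the raw string
    }
}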
