I need help with how to do this in Swagger.
@SWG\Property(property="LineItems", type="array", @SWG\Items(ref="#/definitions/LineItem"))
@SWG\Definition(
    definition="LineItem",
    required={"Description","Quantity","UnitAmount"},
    @SWG\Property(property="Description", type="string", example="Item 1"),
    @SWG\Property(property="Quantity", type="integer", example=100),
    @SWG\Property(property="UnitAmount", type="number", format="float", example=11)
)
@SWG\Definition(
    definition="LineItem2",
    required={"Description","Quantity","UnitAmount"},
    @SWG\Property(property="Description", type="string", example="Item 2"),
    @SWG\Property(property="Quantity", type="integer", example=10),
    @SWG\Property(property="UnitAmount", type="number", format="float", example=21)
)
I want to add LineItem and LineItem2 to the LineItems property, and I want the output to look like this:
"LineItems": [
{
"Description": "Item 1",
"Quantity": 100,
"UnitAmount": 11,
},
{
"Description": "Item 2",
"Quantity": 100,
"UnitAmount": 22,
}
]
To display an array example with multiple items in Swagger UI, you need an array-level example, such as:
LineItems:
  type: array
  items:
    $ref: '#/definitions/LineItem'
  # Multi-item example
  example:
    - Description: Item 1
      Quantity: 100
      UnitAmount: 11
    - Description: Item 2
      Quantity: 100
      UnitAmount: 22
That is, there is a single definition for array items (LineItem), and the multi-item example is defined using the example keyword on the array level.
The Swagger-PHP version of this would be:
 * @SWG\Property(
 *     property="LineItems",
 *     type="array",
 *     @SWG\Items(ref="#/definitions/LineItem"),
 *     example={
 *         {"Description": "Item 1", "Quantity": 100, "UnitAmount": 11},
 *         {"Description": "Item 2", "Quantity": 100, "UnitAmount": 22}
 *     }
 * )
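With this single array-level example, Swagger UI should render LineItems with both items, matching the desired output above, so a separate LineItem2 definition is not needed.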
I'm trying to apply masking to an input and result field that is part of an array, and the size of the array is dynamic. The documentation says to provide an absolute array index, which is not possible in this use case. Is there an alternative?
For example, how would one mask the age field of all the students in the input document?
Input:
"students" : [
{
"name": "Student 1",
"major": "Math",
"age": "18"
},
{
"name": "Student 2",
"major": "Science",
"age": "20"
},
{
"name": "Student 3",
"major": "Entrepreneurship",
"age": "25"
}
]
If you want to just generate a copy of the input that has a field (or set of fields) removed, you can use json.remove. The trick is to use a comprehension to compute the list of paths to remove. For example:
paths_to_remove := [sprintf("/students/%v/age", [x]) | some x; input.students[x]]
result := json.remove(input, paths_to_remove)
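With the students input above, paths_to_remove evaluates to ["/students/0/age", "/students/1/age", "/students/2/age"], so result is a copy of the input with each of those age fields removed.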
If you are trying to mask fields from the input document in the decision log using the Decision Log Masking feature, then you would write something like:
package system.log

mask[x] {
    some i
    input.input.students[i]
    x := sprintf("/input/students/%v/age", [i])
}
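When decision log masking is enabled, OPA queries this mask rule (data.system.log.mask) for each decision it logs, and every JSON Pointer path the rule produces, for example /input/students/0/age, is erased from the logged event.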
I am trying to get only the matched data from a nested array in an Elasticsearch class, but I am not able to: the whole nested array is being returned as output.
This is my query:
QueryBuilders.nestedQuery("questions",
        QueryBuilders.boolQuery()
                .must(QueryBuilders.matchQuery("questions.questionTypeId", quesTypeId)), ScoreMode.None)
    .innerHit(new InnerHitBuilder());
I am using QueryBuilders to get data from the nested class. It's working fine, but I am not able to get only the matched data.
Request Body:
{
"questionTypeId" : "MCMC"
}
When questionTypeId = "MCMC", this is the output I am getting. Here I want to exclude the entry for which questionTypeId = "SCMC".
Output:
{
"id": "46",
"subjectId": 1,
"topicId": 1,
"subtopicId": 1,
"languageId": 1,
"difficultyId": 4,
"isConceptual": false,
"examCatId": 3,
"examId": 1,
"usedIn": 1,
"questions": [
{
"id": "46_31",
"pid": 31,
"questionId": "QID41336691",
"childId": "CID1",
"questionTypeId": "MCMC",
"instruction": "This is a single correct multiple choice question.",
"question": "Who holds the most english premier league titles?",
"solution": "Manchester United",
"status": 1000,
"questionTranslation": []
},
{
"id": "46_33",
"pid": 33,
"questionId": "QID41336677",
"childId": "CID1",
"questionTypeId": "SCMC",
"instruction": "This is a single correct multiple choice question.",
"question": "Who holds the most english premier league titles?",
"solution": "Manchester United",
"status": 1000,
"questionTranslation": []
}
]
}
As you have tagged this with spring-data-elasticsearch:
Support for returning inner hits was recently added in version 4.1.M1 and so will be included in the next released version. A SearchHit will then contain the complete top-level document, but its innerHits property will contain only the matching inner hits.
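As a rough sketch of what reading those inner hits could look like once 4.1 is available (the QuestionSet entity class and the method name here are made up for illustration, and the inner hit name "questions" assumes the default name taken from the nested path, since no name is set on the InnerHitBuilder):
// Hypothetical service method; QuestionSet is an assumed entity class mapped to the top-level document.
public void searchMatchingQuestions(ElasticsearchOperations operations, String quesTypeId) {
    NativeSearchQuery query = new NativeSearchQueryBuilder()
            .withQuery(QueryBuilders.nestedQuery("questions",
                    QueryBuilders.boolQuery()
                            .must(QueryBuilders.matchQuery("questions.questionTypeId", quesTypeId)),
                    ScoreMode.None)
                    .innerHit(new InnerHitBuilder()))
            .build();

    SearchHits<QuestionSet> searchHits = operations.search(query, QuestionSet.class);

    for (SearchHit<QuestionSet> hit : searchHits) {
        QuestionSet fullDocument = hit.getContent();                     // complete top-level document
        SearchHits<?> matchedQuestions = hit.getInnerHits("questions");  // only the nested entries that matched
        // use fullDocument and matchedQuestions as needed
    }
}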
Here's the JSON I'm working with:
{
"featured": [
{
"name": "Featured Show number 1",
"id": "123",
"slug": "featured-show-number-one",
"description": "This is an item description for show number 1"
},
{
"name": "Featured Show number 2",
"id": "456",
"slug": "featured-show-nubmer-tow",
"description": "This is an item description for show number 2"
}
],
"nonfeatured": [
{
"name": "Show number 3",
"id": "789",
"slug": "show-number-three",
"description": "This is an item description for show number 3"
},
{
"name": "Show number 4",
"id": "135",
"slug": "show-number-four",
"description": "This is an item description for show number 4"
}
]
}
After I parse this JSON using two data models, one for "featured" and one for "nonfeatured", looping through each show and adding it to an array, I need to combine the arrays of shows into one array containing all the shows. However, from that single array I still need to be able to tell which shows are featured and which are not. Is there a way to do this?
The short answer to your specific question here is 'no'. The result of adding array A and array B (where both contain the same types) is A + B; there is no metadata providing any kind of source information.
But that is not to say that you couldn't accomplish the same thing by changing the model slightly. One option would be to add an extra boolean flag to the model called isFeatured or similar. Or you could 'future-proof' the work by using an enumeration of source lists containing featured, non-featured, plus anything else you may require later (a rough sketch of this follows below).
To take the first example, an option would be to add the boolean field and then call code similar to below prior to 'summing' the arrays.
arrayA.forEach { $0.isFeatured = true }
arrayB.forEach { $0.isFeatured = false }
let arrayC = arrayA + arrayB
Then each element in the summed array will tell you its source list.
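To sketch the enum idea as well (the Source, Show, Catalog and jsonData names here are made up for illustration, assuming the shows are decoded from the JSON above):
import Foundation

enum Source {
    case featured
    case nonFeatured
}

struct Show: Decodable {
    let name: String
    let id: String
    let slug: String
    let description: String
    var source: Source? = nil            // not part of the JSON; set after decoding

    private enum CodingKeys: String, CodingKey {
        case name, id, slug, description // `source` is deliberately excluded from decoding
    }
}

struct Catalog: Decodable {
    let featured: [Show]
    let nonfeatured: [Show]
}

// jsonData: Data containing the JSON shown in the question.
do {
    let catalog = try JSONDecoder().decode(Catalog.self, from: jsonData)

    // Tag each show with its source list, then combine into one array.
    let allShows = catalog.featured.map { show -> Show in
        var tagged = show
        tagged.source = Source.featured
        return tagged
    } + catalog.nonfeatured.map { show -> Show in
        var tagged = show
        tagged.source = Source.nonFeatured
        return tagged
    }

    // Every element of allShows now knows which list it came from.
    print(allShows.count)
} catch {
    print("Decoding failed: \(error)")
}
Each element of the combined array then carries its own Source value, and further source lists can be handled later by simply adding cases to the enum.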
(This is my first Stack Overflow post, so go easy on me, haha)
I'm using:
- OpenAPI (v3)
- L5-Swagger (a wrapper of swagger-php & swagger-ui)
I'm using annotations to generate an OpenAPI spec. The spec is being generated without errors from the console. However, once the spec is generated, an additional property is being added to every property in every model.
I've tried:
1. rewriting the model
2. rewriting the properties in different ways
One of my models & the "id" property:
/**
 * Class ActionPlan
 *
 * @OA\Schema(
 *     description="Action Plans",
 *     title="Action Plan Schema",
 *     required={
 *         "id",
 *         "name",
 *         "organization_id",
 *         "assessment_period_id",
 *         "completed",
 *         "created_by",
 *         "updated_by"
 *     },
 * )
 *
 * @OA\Property(
 *     property="id",
 *     type="integer",
 *     format="int32",
 *     description="Action Plan ID"
 * )
Here's what is being generated:
"ActionPlan": {
"title": "Action Plan Schema",
"description": "Action Plans",
"required": [
"id",
"name",
"organization_id",
"assessment_period_id",
"completed",
"created_by",
"updated_by"
],
"properties": {
"id": {
"schema": "ActionPlan",
"description": "Action Plan ID",
"type": "integer",
"format": "int32"
},
What am I doing that causes a "schema" property to be generated?
When I put the spec file into the Swagger Editor, it says that ActionPlan.properties.id should NOT have additional properties. Additional property: schema.
I'm just wondering what's happening to create the "schema" property.
Thanks in advance!
This "error", I learned, is actually not an error at all. It's actually a very helpful feature that I was just unaware of! When an OA\Property is created outside of it's corresponding OA\Schema object, a "schema" property is added in each property to, I imagine, create a reference so we as developers don't lose become confused as to which OA\Schema a property belongs to. To remove this "schema" property, one just needs to move all the OA\Properties inside their corresponding OA\Schema object. Like so..
/**
 * Class ActionPlan
 *
 * @OA\Schema(
 *     description="Action Plans",
 *     title="Action Plan Schema",
 *     required={
 *         "id",
 *         "name",
 *         "organization_id",
 *         "assessment_period_id",
 *         "completed",
 *         "created_by",
 *         "updated_by"
 *     },
 *     @OA\Property(
 *         property="id",
 *         type="integer",
 *         format="int32",
 *         description="Action Plan ID"
 *     )
 * )
 */
The query below takes a long time to create a temporary table, even though it only has 228,000 distinct records.
DECLARE todate,fromdate DATETIME;
SET fromdate=DATE_SUB(UTC_TIMESTAMP(),INTERVAL 2 DAY);
SET todate=DATE_ADD(UTC_TIMESTAMP(),INTERVAL 14 DAY);
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
DROP TEMPORARY TABLE IF EXISTS tempabc;
SET max_heap_table_size = 1024*1024*1024;
CREATE TEMPORARY TABLE IF NOT EXISTS tempabc
-- (index using BTREE(id))
ENGINE=MEMORY
AS
(
SELECT SQL_NO_CACHE DISTINCT id
FROM abc
WHERE StartTime BETWEEN fromdate AND todate
);
I already created an index on the StartTime column, but it still takes 20 seconds to create the table. Kindly help me reduce the creation time.
More info:
I changed my query. Earlier I was using the "tempabc" temporary table to get my output; now I am using an IN clause instead of the temporary table, and it now takes 12 seconds to execute, which is still more than the expected time.
Earlier (taking 20-30 sec):
DECLARE todate,fromdate DATETIME;
SET fromdate=DATE_SUB(UTC_TIMESTAMP(),INTERVAL 2 DAY);
SET todate=DATE_ADD(UTC_TIMESTAMP(),INTERVAL 14 DAY);
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
DROP TEMPORARY TABLE IF EXISTS tempabc;
SET max_heap_table_size = 1024*1024*1024;
CREATE TEMPORARY TABLE IF NOT EXISTS tempabc
-- (index using BTREE(id))
ENGINE=MEMORY
AS
(
SELECT SQL_NO_CACHE DISTINCT id
FROM abc
WHERE StartTime BETWEEN fromdate AND todate
);
SELECT DISTINCT p.xyzID
FROM tempabc s
JOIN xyz_tab p ON p.xyzID=s.ID AND IFNULL(IsGeneric,0)=0;
Now (taking 12-14 sec):
DECLARE todate,fromdate Timestamp;
SET fromdate=DATE_SUB(UTC_TIMESTAMP(),INTERVAL 2 DAY);
SET todate=DATE_ADD(UTC_TIMESTAMP(),INTERVAL 14 DAY);
SELECT p.xyzID FROM xyz_tab p
WHERE id IN (
SELECT DISTINCT id FROM abc
WHERE StartTime BETWEEN fromdate AND todate )
AND IFNULL(IsGeneric,0)=0 GROUP BY p.xyzID;
But we need to achieve an execution time of 3-5 seconds.
This is my EXPLAIN output:
*************************** 1. row ***************************
id: 1
select_type: SIMPLE
table: abc
partitions: NULL
type: index
possible_keys: ix_starttime_id,IDX_Start_time,IX_id_starttime,IX_id_starttime_prgsvcid
key: IX_id_starttime
key_len: 163
ref: NULL
rows: 18779876
filtered: 1.27
Extra: Using where; Using index; Using temporary; Using filesort; LooseScan
*************************** 2. row ***************************
id: 1
select_type: SIMPLE
table: p
partitions: NULL
type: eq_ref
possible_keys: PRIMARY,IX_seriesid
key: PRIMARY
key_len: 152
ref: onconnectdb.abc.ID
rows: 1
filtered: 100.00
Extra: Using where
EXPLAIN in JSON format:
EXPLAIN: {
"query_block": {
"select_id": 1,
"cost_info": {
"query_cost": "10139148.44"
},
"grouping_operation": {
"using_temporary_table": true,
"using_filesort": true,
"cost_info": {
"sort_cost": "1.00"
},
"nested_loop": [
{
"table": {
"table_name": "abc",
"access_type": "index",
"possible_keys": [
"ix_starttime_tmsid",
"IDX_Start_time",
"IX_id_starttime",
"IX_id_starttime_prgsvcid"
],
"key": "IX_id_starttime",
"used_key_parts": [
"ID",
"StartTime",
"EndTime"
],
"key_length": "163",
"rows_examined_per_scan": 19280092,
"rows_produced_per_join": 264059,
"filtered": "1.37",
"using_index": true,
"loosescan": true,
"cost_info": {
"read_cost": "393472.45",
"eval_cost": "52812.00",
"prefix_cost": "446284.45",
"data_read_per_join": "2G"
},
"used_columns": [
"ID",
"StartTime"
],
"attached_condition": "(`onconnectdb`.`abc`.`StartTime` between <cache>(fromdate#1) and <cache>(todate#0))"
}
},
{
"table": {
"table_name": "p",
"access_type": "eq_ref",
"possible_keys": [
"PRIMARY",
"IX_seriesid"
],
"key": "PRIMARY",
"used_key_parts": [
"ID"
],
"key_length": "152",
"ref": [
"onconnectdb.abc.ID"
],
"rows_examined_per_scan": 1,
"rows_produced_per_join": 1,
"filtered": "100.00",
"cost_info": {
"read_cost": "9640051.00",
"eval_cost": "0.20",
"prefix_cost": "10139147.44",
"data_read_per_join": "2K"
},
"used_columns": [
"ID",
"xyzID",
"IsGeneric"
],
"attached_condition": "(ifnull(`onconnectdb`.`p`.`IsGeneric`,0) = 0)"
}
}
]
}
}
}
Please suggest.