How to define an object with n properties in Swagger

I'm using Swagger with OpenAPI 3 and my API returns the following object:
{
  "fields": {
    "username": true,
    "gender": true,
    "age": true
  },
  "pictures": {
    "1234": true,
    "1235": false
  }
}
But not all fields are required, so this is also valid:
{
  "fields": {
    "username": true
  }
}
The pictures keys (1234 and 1235) are IDs, so they are dynamic.
How can I define this kind of schema?
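One way to express this in OpenAPI 3 (a minimal sketch, not a confirmed answer) is to declare the known-but-optional fields as ordinary properties with no required list, and to use additionalProperties for the dynamic picture IDs:
{
  "type": "object",
  "properties": {
    "fields": {
      "type": "object",
      "properties": {
        "username": { "type": "boolean" },
        "gender": { "type": "boolean" },
        "age": { "type": "boolean" }
      }
    },
    "pictures": {
      "type": "object",
      "additionalProperties": { "type": "boolean" }
    }
  }
}
Because no required array is given, every property of fields is optional, and additionalProperties lets pictures accept arbitrary string keys with boolean values.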

Related

Is it possible to move a Shape from one slide to another?

I use the Google Slides API on a NodeJS server to edit a presentation, and I can't find anything in the documentation about moving an object, such as a Shape, to another slide.
Answer:
You have to do this by getting the shape from the response of presentations.pages.get, removing it, and inserting it with presentations.batchUpdate.
More Information:
In order to 'move' an object from one slide to another using the API, you in fact have to make two requests: one to remove the current object, and one to insert it into the new slide.
Firstly, you will need to make a request to presentations.pages.get in order to get all PageElement objects in the page. As per the documentation, a Shape is an instance of a PageElement object which represents a shape on a slide.
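As a sketch, the request is made against this endpoint (presentationId and pageObjectId are placeholders for your own values):
GET https://slides.googleapis.com/v1/presentations/{presentationId}/pages/{pageObjectId}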
The response of presentations.pages.get will be a Page resource:
{
  "objectId": string,
  "pageType": enum (PageType),
  "pageElements": [
    {
      object (PageElement)
    }
  ],
  "revisionId": string,
  "pageProperties": {
    object (PageProperties)
  },

  // Union field properties can be only one of the following:
  "slideProperties": {
    object (SlideProperties)
  },
  "layoutProperties": {
    object (LayoutProperties)
  },
  "notesProperties": {
    object (NotesProperties)
  },
  "masterProperties": {
    object (MasterProperties)
  }
}
The Shape will be contained within the response['pageElements'] resource from this request and will be of the form:
{
  "objectId": string,
  "size": {
    object (Size)
  },
  "transform": {
    object (AffineTransform)
  },
  "title": string,
  "description": string,

  // Union field element_kind can be only one of the following:
  "elementGroup": {
    object (Group)
  },
  "shape": {
    "shapeType": enum (Type),
    "text": {
      object (TextContent)
    },
    "shapeProperties": {
      object (ShapeProperties)
    },
    "placeholder": {
      object (Placeholder)
    }
  }
}
Once you have obtained the Shape object from the response of presentations.pages.get, you will then need to create a CreateShapeRequest from the retrieved properties:
{
  "objectId": string,
  "elementProperties": {
    object (PageElementProperties)
  },
  "shapeType": enum (Type)
}
And a DeleteObjectRequest which can be used to remove the Shape on the previous slide:
{
  "objectId": string
}
The DeleteObjectRequest and CreateShapeRequest can both be contained inside the same batchUpdate request. The request body should be of the form:
{
  "requests": [
    {
      object (Request)
    }
  ],
  "writeControl": {
    object (WriteControl)
  }
}
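As a sketch, a combined body might look like this (the objectId values, the TEXT_BOX type, and the size/transform numbers are placeholders; in practice you would copy size and transform from the Shape you retrieved):
{
  "requests": [
    {
      "createShape": {
        "objectId": "shapeOnNewSlide",
        "shapeType": "TEXT_BOX",
        "elementProperties": {
          "pageObjectId": "targetSlideId",
          "size": {
            "width": { "magnitude": 3000000, "unit": "EMU" },
            "height": { "magnitude": 3000000, "unit": "EMU" }
          },
          "transform": {
            "scaleX": 1,
            "scaleY": 1,
            "translateX": 100000,
            "translateY": 100000,
            "unit": "EMU"
          }
        }
      }
    },
    {
      "deleteObject": {
        "objectId": "shapeOnOldSlide"
      }
    }
  ]
}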
The full documentation for the batchUpdate method can be seen here.
References:
Shapes | Slides API | Google Developers
REST Resource: presentations.pages | Slides API | Google Developers
Requests | Slides API | Google Developers
Method: presentations.batchUpdate | Slides API | Google Developers

How to get a sub-field of a struct-type map in the search response of a YQL query in Vespa?

Sample Data:
"fields": {
"key1":0,
"key2":"no",
"Lang": {
"en": {
"firstName": "Vikrant",
"lastName":"Thakur"
},
"ch": {
"firstName": "维克兰特",
"lastName":"塔库尔"
}
}
}
Expected Response:
"fields": {
"Lang": {
"en": {
"firstName": "Vikrant",
"lastName":"Thakur"
}
}
}
I have added the following in my search-definition demo.sd:
struct lang {
  field firstName type string {}
  field lastName type string {}
}
field Lang type map<string, lang> {
  indexing: summary
  struct-field key {
    indexing: summary | index | attribute
  }
}
I want to write a yql query something like this (This doesn't work):
http://localhost:8080/search/?yql=select Lang.en from sources demo where key2 contains 'no';
My temporary workaround approach
I have implemented a custom searcher in MySearcher.java, through which I extract the required sub-field, set a new field 'defaultLang', and remove the 'Lang' field. The response generated by the searcher:
"fields": {
"defaultLang": {
"firstName": "Vikrant",
"lastName":"Thakur"
}
}
I have written the following in MySearcher.java:
for (Hit hit : result.hits()) {
    String language = "en"; // temporarily hard-coded
    StructuredData Lang = (StructuredData) hit.getField("Lang");
    Inspector o = Lang.inspect();
    // the map arrives as an array of {key, value} entries, so scan for the wanted key
    for (int j = 0; j < o.entryCount(); j++) {
        if (o.entry(j).field("key").asString("").equals(language)) {
            SlimeAdapter value = (SlimeAdapter) o.entry(j).field("value");
            hit.setField("defaultLang", value);
            break;
        }
    }
    hit.removeField("Lang");
}
Edit-1: A more efficient way instead is to make use of the Inspectable interface and Inspector, as above (thanks to @Jo Kristian Bergum).
But in the above code I have to loop through all the languages to filter out the required one. I want to avoid this O(n) time complexity and instead use the map structure to access the key in O(1), because the languages may grow to 1000, and this is done for each hit.
All this is due to the StructuredData type I am getting in the results. StructuredData doesn't preserve the map structure; instead it gives a JSON-like array:
[{
  "key": "en",
  "value": {
    "firstName": "Vikrant",
    "lastName": "Thakur"
  }
}, {
  "key": "ch",
  "value": {
    "firstName": "维克兰特",
    "lastName": "塔库尔"
  }
}]
Please, suggest a better approach altogether, or any help with my current one. Both are appreciated.
The YQL sample query is, I guess, meant to illustrate what you want, as that syntax is not valid. Picking a given key from the field Lang of type map can be done as you do in your searcher, but deserializing into JSON and parsing that JSON is probably inefficient: StructuredData implements the Inspectable interface, so you can inspect it directly without going through the JSON format. See https://docs.vespa.ai/documentation/reference/inspecting-structured-data.html

Combining Multiple Falcor Data Sources into Single Model

I've modified the question to explain better:
I have two Falcor models from two different HttpDataSource, like below:
First model (User model):
const user_model = new falcor.Model({
  source: new HttpDataSource('http://localhost:3000/api/userManagement')
});
user_model.get(['user', 'list'])
OUTPUT1:
{
  "jsonGraph": {
    "user": {
      "list": {
        "$type": "atom",
        "value": {
          "users": [...]
        }
      }
    }
  }
}
Second model (Role model):
const role_model = new falcor.Model({
  source: new HttpDataSource('http://localhost:3000/api/roleManagement')
});
role_model.get(['role', 'list'])
OUTPUT2:
{
  "jsonGraph": {
    "role": {
      "list": {
        "$type": "atom",
        "value": {
          "roles": [...]
        }
      }
    }
  }
}
Is there a way to combine all these Falcor models into a single model?
The purpose is that if I do user_model.get(['user', 'list']) more than once, the data comes from the Falcor model cache (after the first fetch from the DB).
But if I then do role_model.get(['user', 'list']), I have to hit the DB again to get the data (in order to store the same user list in the role_model cache).
So if instead there were a way like below:
all_model = user_model + role_model
then I could do all_model.get(['user', 'list']) or all_model.get(['role', 'list']), and I would have only one combined Falcor model cache at the browser end.
Hope the question is clearer now.
You must use forkJoin (from RxJS):
import { forkJoin } from 'rxjs';

forkJoin(model1.source, model2.source).subscribe(res => {
  // res[0] holds the response of model1.source
  // res[1] holds the response of model2.source
  let data = { ...res[0], ...res[1] };
  // data now contains all the properties
});
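Alternatively, here is a rough sketch of the single-combined-model idea itself. This is an assumption, not an established Falcor pattern: compositeSource and its route-by-first-key rule are invented for illustration, and it presumes each get() call targets only one subtree. The point is that one falcor.Model means one shared cache:
const userSource = new HttpDataSource('http://localhost:3000/api/userManagement');
const roleSource = new HttpDataSource('http://localhost:3000/api/roleManagement');

// hypothetical composite source: routes each request by the first key of the first path
const compositeSource = {
  get(pathSets) {
    const source = pathSets[0][0] === 'user' ? userSource : roleSource;
    return source.get(pathSets);
  }
};

const all_model = new falcor.Model({ source: compositeSource });
// all_model.get(['user', 'list']) and all_model.get(['role', 'list'])
// now populate the same model cache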

Client-side mutation with RANGE_ADD type doesn't include edge inside request payload

I'm trying to create a new object using the client-side mutation described below:
import Relay from 'react-relay'

export default class CreateThemeMutation extends Relay.Mutation {
  static fragments = {
    admin: () => Relay.QL`fragment on Admin { id }`,
  };
  getMutation() {
    return Relay.QL`mutation { createTheme }`
  }
  getFatQuery() {
    return Relay.QL`
      fragment on CreateThemePayload {
        admin { themes }
        themeEdge
      }
    `
  }
  getVariables() {
    return {
      name: this.props.name,
    }
  }
  getConfigs() {
    return [{
      type: 'RANGE_ADD',
      parentName: 'admin',
      parentID: this.props.admin.id,
      connectionName: 'themes',
      edgeName: 'themeEdge',
      rangeBehaviors: {
        '': 'append',
      },
    }]
  }
}
Root query field admin is quite similar to viewer, so this shouldn't be a problem. The problem is I haven't found themeEdge (which I believe should be present) within the request payload (admin { themes } is there, though):
query: "mutation CreateThemeMutation($input_0:CreateThemeInput!){createTheme(input:$input_0){clientMutationId,...F3}} fragment F0 on Admin{id} fragment F1 on Admin{id,...F0} fragment F2 on Admin{_themes2gcwoM:themes(first:20,query:""){count,pageInfo{hasNextPage,hasPreviousPage,startCursor,endCursor},edges{node{id,name,createdAt},cursor}},id,...F1} fragment F3 on CreateThemePayload{admin{id,...F0,id,...F2}}"
variables: {input_0: {name: "test", clientMutationId: "0"}}
As a result, outputFields.themeEdge.resolve inside the server-side mutation never gets called and I see this message:
Warning: writeRelayUpdatePayload(): Expected response payload to include the newly created edge `themeEdge` and its `node` field. Did you forget to update the `RANGE_ADD` mutation config?
I've seen a similar issue on GitHub. However, REQUIRED_CHILDREN isn't my case because the application has already requested the themes connection. Am I missing something obvious? Should I paste more info? Thanks.
react-relay version: 0.6.1
I ran into the same issue and eventually solved it by making sure that my equivalent of themeEdge actually existed as an edge in my schema. If you grep your schema for themeEdge, does an object exist?
For reference, here's my edge definition tailored for you:
{
  "name": "themeEdge",
  "description": null,
  "args": [],
  "type": {
    "kind": "NON_NULL",
    "name": null,
    "ofType": {
      "kind": "OBJECT",
      "name": "ThemeEdge",
      "ofType": null
    }
  },
  "isDeprecated": false,
  "deprecationReason": null
}
and
{
  "kind": "OBJECT",
  "name": "ThemeEdge",
  "description": "An edge in a connection.",
  "fields": [{
    "name": "node",
    "description": "The item at the end of the edge.",
    "args": [],
    "type": {
      "kind": "NON_NULL",
      "name": null,
      "ofType": {
        "kind": "OBJECT",
        "name": "Theme",
        "ofType": null
      }
    },
    "isDeprecated": false,
    "deprecationReason": null
  }]
}
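For completeness, here is a rough server-side sketch (using graphql-relay; AdminType, ThemeEdgeType, getAdmin, getThemes, and createTheme are hypothetical names standing in for your own schema and data layer) of a mutation payload that actually exposes themeEdge so RANGE_ADD can request it:
import { mutationWithClientMutationId, cursorForObjectInConnection } from 'graphql-relay'
import { GraphQLNonNull, GraphQLString } from 'graphql'

const CreateThemeMutation = mutationWithClientMutationId({
  name: 'CreateTheme',
  inputFields: {
    name: { type: new GraphQLNonNull(GraphQLString) },
  },
  outputFields: {
    admin: { type: AdminType, resolve: () => getAdmin() },
    themeEdge: {
      type: ThemeEdgeType,
      resolve: ({ theme }) => ({
        // compute the cursor of the new node within the connection
        cursor: cursorForObjectInConnection(getThemes(), theme),
        node: theme,
      }),
    },
  },
  mutateAndGetPayload: ({ name }) => createTheme(name).then(theme => ({ theme })),
})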
Also note that your rangeBehaviors must exactly match the query you use to retrieve your parent object. You can specify multiple queries as follows, which also shows the syntax for when your query contains multiple variables:
{
  type: 'RANGE_ADD',
  parentName: 'admin',
  parentID: this.props.admin.id,
  connectionName: 'themes',
  edgeName: 'themeEdge',
  rangeBehaviors: {
    '': 'append',
    'first(1).id($adminId)': 'append',
  },
}

How to make elasticsearch add the timestamp field to every document in all indices?

Elasticsearch experts,
I have been unable to find a simple way to just tell ElasticSearch to insert the _timestamp field for all the documents that are added in all the indices (and all document types).
I see an example for specific types:
http://www.elasticsearch.org/guide/reference/mapping/timestamp-field/
and also see an example for all indices for a specific type (using _all):
http://www.elasticsearch.org/guide/reference/api/admin-indices-put-mapping/
but I am unable to find any documentation on adding it by default for all documents that get added irrespective of the index and type.
Elasticsearch used to support automatically adding timestamps to documents being indexed, but deprecated this feature in 2.0.0.
From the version 5.5 documentation:
The _timestamp and _ttl fields were deprecated and are now removed. As a replacement for _timestamp, you should populate a regular date field with the current timestamp on application side.
You can do this by providing it when creating your index.
curl -XPOST localhost:9200/test -d '{
  "settings": {
    "number_of_shards": 1
  },
  "mappings": {
    "_default_": {
      "_timestamp": {
        "enabled": true,
        "store": true
      }
    }
  }
}'
That will then automatically create a _timestamp for every document that you put in the index.
Then, after indexing something, the _timestamp field will be returned when you request it.
Here is another way to get an indexing timestamp; hope this may help someone.
An ingest pipeline can be used to add a timestamp when a document is indexed. Here is a sample:
PUT _ingest/pipeline/indexed_at
{
  "description": "Adds indexed_at timestamp to documents",
  "processors": [
    {
      "set": {
        "field": "_source.indexed_at",
        "value": "{{_ingest.timestamp}}"
      }
    }
  ]
}
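Before pointing an index at the pipeline, you can check its output with the simulate API (the sample document here is arbitrary):
POST _ingest/pipeline/indexed_at/_simulate
{
  "docs": [
    {
      "_source": {
        "message": "hello"
      }
    }
  ]
}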
Earlier, Elasticsearch used named pipelines, which meant the 'pipeline' param had to be specified in the Elasticsearch endpoint used to write/index documents (Ref: link). This was a bit troublesome, as it required changes to the endpoints on the application side.
With Elasticsearch version >= 6.5, you can now specify a default pipeline for an index using the index.default_pipeline setting (refer to the link for details).
Here is the command to set the default pipeline:
PUT ms-test/_settings
{
  "index.default_pipeline": "indexed_at"
}
I haven't tried it out yet, as I haven't upgraded to ES 6.5, but the above command should work.
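Once the setting is applied, a quick sanity check (a sketch, assuming the indexed_at pipeline from above) is to index a document into ms-test and fetch it back; the returned _source should contain the indexed_at field:
PUT ms-test/_doc/1
{
  "message": "hello"
}

GET ms-test/_doc/1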
You can make use of default index pipelines, leverage the script processor, and thus emulate the auto_now_add functionality you may know from Django and DEFAULT GETDATE() from SQL.
The process of adding a default yyyy-MM-dd HH:mm:ss date goes like this:
1. Create the pipeline and specify which indices it'll be allowed to run on:
PUT _ingest/pipeline/auto_now_add
{
  "description": "Assigns the current date if not yet present and if the index name is whitelisted",
  "processors": [
    {
      "script": {
        "source": """
          // skip if not whitelisted
          if (![ "myindex",
                 "logs-index",
                 "..."
               ].contains(ctx['_index'])) { return; }

          // don't overwrite if present
          if (ctx['created_at'] != null) { return; }

          ctx['created_at'] = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss").format(new Date());
        """
      }
    }
  ]
}
Side note: the ingest processor's Painless script context is documented here.
2. Update the default_pipeline setting in all of your indices:
PUT _all/_settings
{
  "index": {
    "default_pipeline": "auto_now_add"
  }
}
Side note: you can restrict the target indices using the multi-target syntax:
PUT myindex,logs-2021-*/_settings?allow_no_indices=true
{
  "index": {
    "default_pipeline": "auto_now_add"
  }
}
3. Ingest a document to one of the configured indices:
PUT myindex/_doc/1
{
  "abc": "def"
}
4. Verify that the date string has been added:
GET myindex/_search
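The hit should then look roughly like this (illustrative only; the timestamp is a placeholder value):
{
  "hits": {
    "hits": [
      {
        "_index": "myindex",
        "_id": "1",
        "_source": {
          "abc": "def",
          "created_at": "2021-05-04 11:25:32"
        }
      }
    ]
  }
}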
An example for ElasticSearch 6.6.2 in Python 3:
from elasticsearch import Elasticsearch

es = Elasticsearch(hosts=["localhost"])

timestamp_pipeline_setting = {
    "description": "insert timestamp field for all documents",
    "processors": [
        {
            "set": {
                "field": "ingest_timestamp",
                "value": "{{_ingest.timestamp}}"
            }
        }
    ]
}
es.ingest.put_pipeline("timestamp_pipeline", timestamp_pipeline_setting)

conf = {
    "settings": {
        "number_of_shards": 2,
        "number_of_replicas": 1,
        "default_pipeline": "timestamp_pipeline"
    },
    "mappings": {
        "articles": {
            "dynamic": "false",
            "_source": {"enabled": "true"},
            "properties": {
                "title": {"type": "text"},
                "content": {"type": "text"}
            }
        }
    }
}
response = es.indices.create(
    index="articles_index",
    body=conf,
    ignore=400  # ignore 400 already-exists code
)
print('\nresponse:', response)

doc = {
    'title': 'automatically adding a timestamp to documents',
    'content': 'prior to version 5 of Elasticsearch, documents had a metadata field called _timestamp. When enabled, this _timestamp was automatically added to every document. It would tell you the exact time a document had been indexed.',
}
res = es.index(index="articles_index", doc_type="articles", id=100001, body=doc)
print(res)

res = es.get(index="articles_index", doc_type="articles", id=100001)
print(res)
For ES 7.x, the example should work after removing the doc_type-related parameters, as document types are no longer supported.
First create the index and its properties, such as fields and datatypes, and then insert the data using the REST API.
Below is the way to create an index with field properties; execute the following in the Kibana console:
PUT /vfq-jenkins
{
  "mappings": {
    "properties": {
      "BUILD_NUMBER": { "type": "double" },
      "BUILD_ID": { "type": "double" },
      "JOB_NAME": { "type": "text" },
      "JOB_STATUS": { "type": "keyword" },
      "time": { "type": "date" }
    }
  }
}
The next step is to insert data into that index:
curl -u elastic:changeme -X POST 'http://elasticsearch:9200/vfq-jenkins/_doc/?pretty' \
  -H 'Content-Type: application/json' -d '{
  "BUILD_NUMBER": "83", "BUILD_ID": "83", "JOB_NAME": "OMS_LOG_ANA", "JOB_STATUS": "SUCCESS",
  "time": "2019-09-08T12:39:00"
}'
