I'm looking to grab the image 'src' values within this JSON response, but my attempts so far have left me at a loose end. Any help would be brilliant.
My Model
def test
  response = self.class.get("URLFORRESPONSE")
  @elements = response.parsed_response["extractorData"]
  @parsed = @elements.collect { |e| e['src'] }
end
JSON Response
{
"extractorData" : {
"url" : "http://testwebsite.com/",
"resourceId" : "409417ee21618b70d74b03231a793c2d7",
"data" : [ {
"group" : [ {
"image" : [ {
"src" : "test0.jpg"
} ]
}, {
"image" : [ {
"src" : "test1.jpg"
} ]
}, {
"image" : [ {
"src" : "test2.jpg"
} ]
}, {
"image" : [ {
"src" : "test3.jpg"
} ]
}, {
"image" : [ {
"src" : "test4.jpg"
} ]
}
Your JSON is invalid. It should be:
{
"extractorData": {
"url": "http://testwebsite.com/",
"resourceId": "409417ee21618b70d74b03231a793c2d7",
"data": [{
"group": [{
"image": [{
"src": "test0.jpg"
}]
}, {
"image": [{
"src": "test1.jpg"
}]
}, {
"image": [{
"src": "test2.jpg"
}]
}, {
"image": [{
"src": "test3.jpg"
}]
}, {
"image": [{
"src": "test4.jpg"
}]
}]
}]
}
}
To extract the src values:
@parsed = @elements['data'][0]['group'].map { |g| g['image'][0]['src'] }
I know this is ugly as hell, but I hope this suggestion helps.
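With the corrected JSON above, that should give:
@parsed
=> ["test0.jpg", "test1.jpg", "test2.jpg", "test3.jpg", "test4.jpg"]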
Since HTTParty's parsed_response returns a hash, and assuming you're using Ruby 2.3 or later, you can do:
@elements = response.parsed_response["extractorData"]
@elements.dig('data').collect { |h| h.dig('group').collect { |h| h.dig('image').collect { |h| h.dig('src') } } }
See it in action:
h = {"extractorData"=>{"url"=>"http://testwebsite.com/", "resourceId"=>"409417ee21618b70d74b03231a793c2d7", "data"=>[{"group"=>[{"image"=>[{"src"=>"test0.jpg"}]}, {"image"=>[{"src"=>"test1.jpg"}]}, {"image"=>[{"src"=>"test2.jpg"}]}, {"image"=>[{"src"=>"test3.jpg"}]}, {"image"=>[{"src"=>"test4.jpg"}]}]}]}}
h.dig('extractorData', 'data').collect{|h| h.dig('group').collect{|h| h.dig('image').collect{|h| h.dig('src')}}}
=> [[["test0.jpg"], ["test1.jpg"], ["test2.jpg"], ["test3.jpg"], ["test4.jpg"]]]
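If a flat list is more convenient than the nested arrays, a small variation on the same sample hash h works too:
h.dig('extractorData', 'data').flat_map { |d| d['group'] }.flat_map { |g| g['image'] }.map { |i| i['src'] }
=> ["test0.jpg", "test1.jpg", "test2.jpg", "test3.jpg", "test4.jpg"]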
Related
I'm new to API Connect, and I haven't been able to find the correct mapping to pass from an array of objects to an object, evaluating its content.
Let me explain: I have a JSON input like this:
{
"methodCall": {
"methodName": {
"$": "ThisIsTheMethodName"
},
"params": {
"param": {
"value": {
"array": {
"data": {
"value": {
"struct": {
"member": [
{
"name": {
"$": "message"
},
"value": {
"string": {
"$": "Some text to send to client"
}
}
},
{
"name": {
"$": "phone"
},
"value": {
"string": {
"$": "9876543120124"
}
}
},
{
"name": {
"$": "date"
},
"value": {
"string": {}
}
},
{
"name": {
"$": "appid"
},
"value": {
"string": {
"$": "Application Identificator"
}
}
},
{
"name": {
"$": "costCenter"
},
"value": {
"string": {
"$": "102030"
}
}
},
{
"name": {
"$": "filled"
},
"value": {
"string": {
"$": "filledString"
}
}
}
]
}
}
}
}
}
}
}
}
}
and I need to generate this JSON output from the mapping:
{
"phoneNumberSMS":"983849780",
"message":"Some text to send to client",
"date": "2022-10-04T15:30:00",
"appId":"Application Identificator",
"costCenter":"102030",
"filled":"filledString" }
I have tried the following configuration, but without success.
In the YAML:
actions:
  - set: output.phoneNumberSMS
    foreach: input.methodCall.params.param.value.array.data.value.struct.member.value.string
    from:
      - input.methodCall.params.param.value.array.data.value.struct.member.name.$
      - input.methodCall.params.param.value.array.data.value.struct.member.value.string.$
    values: |-
      var retValue1 = '';
      if($(input.methodCall.params.param.value.array.data.value.struct.member.name.$) == 'phone'){
        retValue1 = input.methodCall.params.param.value.array.data.value.struct.member.value.string.$;
      }
      retValue1;
I appreciate your help!
I solved this in two mapping phases:
First, create an array called members, where each node is of type member with name and value properties. This 'members' array receives the data coming from the request.
In the second mapping phase, I took the output variable from the previous mapping (of type members) and assigned it to message.body.
The aim of this is to get rid of the field names with a dollar symbol ($), so the mapping does not raise an error for not recognizing them.
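As a rough illustration only (the exact shape depends on your map policy; the values are copied from the request above), the first-phase 'members' array could look something like this before the '$' wrappers are dealt with in the second phase:
{
  "members": [
    { "name": { "$": "message" }, "value": { "string": { "$": "Some text to send to client" } } },
    { "name": { "$": "phone" }, "value": { "string": { "$": "9876543120124" } } }
  ]
}
with the remaining members (date, appid, costCenter, filled) following the same pattern.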
I'm searching for list items using the /search/query endpoint of MS Graph. I want to use aggregations and spell checking. This is my request:
{
"requests": [
{
"entityTypes": [
"listItem"
],
"query": {
"queryString": "inspring"
},
"fields": [
"title"
],
"aggregations": [
{
"field": "fileType",
"size": 20,
"bucketDefinition": {
"sortBy": "count",
"isDescending": "true",
"minimumCount": 0
}
}
],
"queryAlterationOptions": {
"enableModification": true
}
}
]
}
It returns no results, since the search term was not spell checked and modified:
{
"value": [
{
"searchTerms": [
"inspring"
],
"hitsContainers": [
{
"total": 0,
"moreResultsAvailable": false
}
]
}
],
"#odata.context": "https://graph.microsoft.com/beta/$metadata#Collection(microsoft.graph.searchResponse)"
}
However, when I remove the aggregations and use the following request, it works:
{
"requests": [
{
"entityTypes": [
"listItem"
],
"query": {
"queryString": "inspring"
},
"fields": [
"title"
],
"queryAlterationOptions": {
"enableModification": true
}
}
]
}
Response:
{
"value": [
{
"searchTerms": [
"inspiring"
],
"hitsContainers": [
{
"hits": [...],
"total": 64,
"moreResultsAvailable": true
}
],
"queryAlterationResponse": {
"originalQueryString": "inspring",
"queryAlteration": {
"alteredQueryString": "inspiring",
"alteredHighlightedQueryString": "inspiring",
"alteredQueryTokens": [
{
"offset": 0,
"length": 8,
"suggestion": "inspiring"
}
]
},
"queryAlterationType": "modification"
}
}
],
"#odata.context": "https://graph.microsoft.com/v1.0/$metadata#Collection(microsoft.graph.searchResponse)"
}
How do I need to change my request to make query alterations work with aggregations?
I have a products catalogue where every product is indexed as follows (sample queried from http://localhost:9200/products/_doc/1):
{
"_index": "products_20201202145032789",
"_type": "_doc",
"_id": "1",
"_version": 1,
"_seq_no": 0,
"_primary_term": 1,
"found": true,
"_source": {
"title": "Roncato Eglo",
"description": "Amazing LED light made of wood and description continues.",
"price": 3990,
"manufacturer": "Eglo",
"category": [
"Lights",
"Indoor lights"
],
"options": [
{
"title": "Mount type",
"value": "E27"
},
{
"title": "Number of bulps",
"value": "4"
},
{
"title": "Batteries included",
"value": "true"
},
{
"title": "Ligt temperature",
"value": "warm"
},
{
"title": "Material",
"value": "wood"
},
{
"title": "Voltage",
"value": "230"
}
]
}
}
Every option contains a different value, so there are many Mount type values, Light temperature values, Material values, and so on.
How can I create an aggregation (filter) that lets customers choose between the various Mount type options:
[ ] E27
[X] E14
[X] GU10
...
Or let them choose from different Material options displayed as checkboxes:
[X] Wood
[ ] Metal
[ ] Glass
...
I can handle it on the frontend once the buckets are created. Creating the different buckets for these options is what I am struggling with.
I have successfully created, displayed, and used aggregations for Category, Manufacturer, and other basic fields. These product options are stored in has_many :through relationships in the database. I am using Rails with the searchkick gem, which allows me to send raw queries to Elasticsearch.
The prerequisite for such an aggregation is to have the options field mapped as nested.
Sample index mapping:
PUT test
{
"mappings": {
"properties": {
"title": {
"type": "keyword"
},
"options": {
"type": "nested",
"properties": {
"title": {
"type": "keyword"
},
"value": {
"type": "keyword"
}
}
}
}
}
}
Sample docs:
PUT test/_doc/1
{
"title": "Roncato Eglo",
"options": [
{
"title": "Mount type",
"value": "E27"
},
{
"title": "Material",
"value": "wood"
}
]
}
PUT test/_doc/2
{
"title": "Eglo",
"options": [
{
"title": "Mount type",
"value": "E27"
},
{
"title": "Material",
"value": "metal"
}
]
}
Assumption: for a given document, a title under options appears only once. For example, there can exist only one nested document under options with the title Material.
Query for aggregation:
GET test/_search
{
"size": 0,
"aggs": {
"OPTION": {
"nested": {
"path": "options"
},
"aggs": {
"TITLE": {
"terms": {
"field": "options.title",
"size": 10
},
"aggs": {
"VALUES": {
"terms": {
"field": "options.value",
"size": 10
}
}
}
}
}
}
}
}
Response:
{
"took" : 2,
"timed_out" : false,
"_shards" : {
"total" : 1,
"successful" : 1,
"skipped" : 0,
"failed" : 0
},
"hits" : {
"total" : {
"value" : 2,
"relation" : "eq"
},
"max_score" : null,
"hits" : [ ]
},
"aggregations" : {
"OPTION" : {
"doc_count" : 4,
"TITLE" : {
"doc_count_error_upper_bound" : 0,
"sum_other_doc_count" : 0,
"buckets" : [
{
"key" : "Material",
"doc_count" : 2,
"VALUES" : {
"doc_count_error_upper_bound" : 0,
"sum_other_doc_count" : 0,
"buckets" : [
{
"key" : "metal",
"doc_count" : 1
},
{
"key" : "wood",
"doc_count" : 1
}
]
}
},
{
"key" : "Mount type",
"doc_count" : 2,
"VALUES" : {
"doc_count_error_upper_bound" : 0,
"sum_other_doc_count" : 0,
"buckets" : [
{
"key" : "E27",
"doc_count" : 2
}
]
}
}
]
}
}
}
}
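Since you mentioned Rails with searchkick, here is a minimal sketch (not tested against your index) of how the nested mapping and the raw aggregation above might be wired up from the Rails side. The Product model name, searchkick's mappings/merge_mappings options, and its body option for raw queries are assumptions based on a standard searchkick setup; a reindex is needed after changing the mapping.
class Product < ApplicationRecord
  # Keeps searchkick's default mapping and adds a nested mapping for "options".
  searchkick merge_mappings: true, mappings: {
    properties: {
      options: {
        type: "nested",
        properties: {
          title: { type: "keyword" },
          value: { type: "keyword" }
        }
      }
    }
  }
end

# After Product.reindex, run the aggregation as a raw query via searchkick's body option:
results = Product.search(body: {
  size: 0,
  aggs: {
    OPTION: {
      nested: { path: "options" },
      aggs: {
        TITLE: {
          terms: { field: "options.title", size: 10 },
          aggs: {
            VALUES: { terms: { field: "options.value", size: 10 } }
          }
        }
      }
    }
  }
})

# Buckets for the checkboxes, e.g. "Material" => wood, metal:
results.aggregations["OPTION"]["TITLE"]["buckets"].each do |title_bucket|
  values = title_bucket["VALUES"]["buckets"].map { |b| b["key"] }
  puts "#{title_bucket['key']}: #{values.join(', ')}"
end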
I'm currently trying to call an API and map an attribute onto my model, e.g. the image src string from the JSON response to the image string column in my model. But right now it's raising the error 'no implicit conversion of String into Integer'.
Feed.rb
require 'httparty'
require 'json'

class Feed < ActiveRecord::Base
  include HTTParty
  base_uri 'https://extraction.import.io/query/runtime'

  has_many :entries

  # GET /feeds
  # GET /feeds.json
  def fetch_data
    response = self.class.get("/2365205f-8502-439e-a6d2-73988cfa03f1?&url=http%3A%2F%2F%2F")
    @elements = response.parsed_response["extractorData"]
    @elements.map do |image_info|
      self.entries.create(image: image_info['url'])
    end
  end
end
Entry.rb
class Entry < ActiveRecord::Base
  belongs_to :feed
end
HTML
<% @feed.entries.each do |image| %>
  <div class="grid-item">
    <%= image_tag(image) %>
  </div>
<% end %>
JSON Response
{
"extractorData": {
"url": "http://linxspiration.com/",
"resourceId": "e26012fd5f25602c1c4e0945a7507e1f",
"data": [
{
"group": [
{
"image": [
{
"src": "http://40.media.tumblr.com/0a38dd25a41e0702940c084b60bee860/tumblr_o5c0tyGhOP1qkegsbo1_1280.jpg",
"href": "http://linxspiration.com/post/142509606341"
}
]
},
{
"image": [
{
"src": "http://36.media.tumblr.com/276def9e46bdfb9efee7f7d4e4444195/tumblr_o5c0szx4F21qkegsbo1_1280.jpg",
"href": "http://linxspiration.com/post/142506402604"
}
]
},
{
"image": [
{
"src": "http://40.media.tumblr.com/4953cdecc24389d94844dfb88c819d8c/tumblr_o055uh8b7h1uhpqwfo1_1280.jpg",
"href": "http://linxspiration.com/post/142503176501/linxsupply-discipline-gets-shit-done-buy-this"
}
]
},
{
"image": [
{
"src": "http://41.media.tumblr.com/353f10283fc3a0237262629b6a395c90/tumblr_o5aadrw6l31qkegsbo1_1280.jpg",
"href": "http://linxspiration.com/post/142499072059"
}
]
},
{
"image": [
{
"src": "http://40.media.tumblr.com/889c65a662a1b690f299593e3581b947/tumblr_o57uysuSjF1tq9q5vo1_1280.jpg",
"href": "http://linxspiration.com/post/142493659142/blazepress-sunrise-in-venice"
}
]
},
{
"image": [
{
"src": "http://45.media.tumblr.com/14c24e549a6559b48933f05ff40e3627/tumblr_o57vmsJ7gk1tq9q5vo1_400.gif",
"href": "http://linxspiration.com/post/142488060049/blazepress-i-lick-paw"
}
]
},
{
"image": [
{
"src": "http://36.media.tumblr.com/f184f397d14563c9e41136c5fe370016/tumblr_o59oo0pUy61qkegsbo1_1280.jpg",
"href": "http://linxspiration.com/post/142476686818"
}
]
},
{
"image": [
{
"src": "http://40.media.tumblr.com/453b70fd4055952e907766a5942cc560/tumblr_o59ohsGHBo1qkegsbo1_1280.jpg",
"href": "http://linxspiration.com/post/142470776914"
}
]
},
{
"image": [
{
"src": "http://41.media.tumblr.com/1de6c873de55ddb899f83441454ff5bb/tumblr_o59ohhnd0k1qkegsbo1_1280.jpg",
"href": "http://linxspiration.com/post/142465333421"
}
]
},
{
"image": [
{
"src": "http://40.media.tumblr.com/f71b3ee53f51a9679dc65096933f2b08/tumblr_o59of8kouq1qkegsbo1_1280.jpg",
"href": "http://linxspiration.com/post/142456009994"
}
]
},
{
"image": [
{
"src": "http://40.media.tumblr.com/b6aa0dc78619a6b9e09b232224c0bfb7/tumblr_o59oeu18Ly1qkegsbo1_1280.jpg",
"href": "http://linxspiration.com/post/142452801623"
}
]
},
{
"image": [
{
"src": "http://41.media.tumblr.com/d1c5a23af31880d10fd89fc8a6a0b8e6/tumblr_o585z3mPuF1tq9q5vo1_1280.jpg",
"href": "http://linxspiration.com/post/142449893982/blazepress-life"
}
]
},
{
"image": [
{
"src": "http://41.media.tumblr.com/03369de74399e12e1901b3751917c512/tumblr_o54gbfJlXx1qkegsbo1_1280.jpg",
"href": "http://linxspiration.com/post/142445969058"
}
]
},
{
"image": [
{
"src": "http://40.media.tumblr.com/6543cbb31ea206a59cbdd1e865d63562/tumblr_o54mncUOEP1qkegsbo1_1280.jpg",
"href": "http://linxspiration.com/post/142440337822"
}
]
}
]
}
]
},
"pageData": {
"statusCode": 200,
"timestamp": 1460206655245
}
}
Any help would be brilliant!
response = self.class.get("/2365205f-8502-439e-a6d2-73988cfa03f1?&url=http%3A%2F%2F%2F")
puts response.parsed_response
Shows that the auth is failing:
<html>
<head><title>401 Authorization Required</title></head>
<body bgcolor="white">
<center><h1>401 Authorization Required</h1></center>
<hr><center>openresty/1.9.7.3</center>
</body>
</html>
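Since the endpoint responds with 401, the request needs to be authenticated. As a sketch only (the _apikey parameter name and the environment variable are assumptions; check your import.io account for the exact parameter and value), you could pass the key with HTTParty's query option instead of hard-coding everything in the path:
response = self.class.get(
  "/2365205f-8502-439e-a6d2-73988cfa03f1",
  query: {
    url: "http://linxspiration.com/",          # page to extract from
    _apikey: ENV["IMPORT_IO_API_KEY"]          # hypothetical env var holding your key
  }
)
puts response.code    # expect 200 once authentication succeeds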
def fetch_data
  ...
  @elements = response.parsed_response["extractorData"]
  # To access the image srcs:
  image_srcs = @elements['data'].first['group'].map { |z| z['image'].first['src'] }
  image_srcs.each do |src|
    self.entries.create(image: src)
  end
end
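For what it's worth, the original 'no implicit conversion of String into Integer' error comes from calling map directly on the parsed hash: iterating a Hash yields [key, value] arrays, and calling ['url'] on an Array raises that TypeError. Mapping over @elements['data'].first['group'] instead, as above, avoids it.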
I am a novice with Elasticsearch, and while writing a script_score I am facing a parse exception saying 'expected field name but got [START_ARRAY]'.
Here is the mapping:
PUT toadb
{
"mappings":{
"keywords":{
"properties":{
"Name":{"type":"string","analyzer": "simple"},
"Type":{"type":"string","index": "not_analyzed"},
"Id":{"type":"string","index": "not_analyzed"},
"Boosting Field":{"type" : "integer", "store" : "yes"}
}
},
"businesses":{
"properties": {
"Name":{"type":"string","analyzer": "simple"},
"Type":{"type":"string","index": "not_analyzed"},
"Id":{"type":"string","index": "not_analyzed"},
"Business_seq":{"type":"string","index": "not_analyzed"},
"Status":{"type":"string","index": "not_analyzed"},
"System_rating":{"type" : "integer", "store" : "yes"},
"System_rating_weight":{"type" : "integer", "store" : "yes"},
"Position":{ "type":"geo_point","lat_lon": true},
"Display Pic":{"type": "string","index": "not_analyzed"},
"Boosting Field":{"type" : "integer", "store" : "yes"}
}
}
}
}
Here is the query I am trying to execute:
GET /toadb/_search
{
"query":{
"function_score" : {
"query" : {
"multi_match" : {
"query": "Restaurant",
"fields": [ "Name"],"fuzziness":1
}},
"script_score":
{
"script":"if(doc['Status'] && doc['Status']=='A'){ _score+ (doc['Boosting Field'].value);}"
}
},
"size":10
}
}
Please provide sample examples if possible (I have already referred to the Elasticsearch documentation).
It looks like you have mistakenly placed the size option in your query. In your example, you have added it as a field next to the function_score query. Instead, it belongs as a sibling to the root query object.
Try this:
GET /toadb/_search
{
"query": {
"function_score": {
"query": {
"multi_match": {
"query": "Restaurant",
"fields": [
"Name"
],
"fuzziness": 1
}
},
"script_score": {
"script": "if(doc['Status'] && doc['Status']=='A'){ _score+ (doc['Boosting Field'].value);}"
}
}
},
"size": 10
}
Have a look at the documentation for the request body search.