include_root_in_json not working properly - ruby-on-rails

I've implemented as_json in the parent model as follows:
def as_json(options = {})
  options[:include] = :items
  super(options)
end
include_root_in_json = true is set in the configuration.
What I GET is:
[
  {
    "order": {
      "items": [
        {
          "key1": "value1"
        },
        {
          "key1": "value2"
        }
      ],
      "key1": "value1"
    }
  }
]
But what I WANT is this:
[
  {
    "order": {
      "items": [
        {
          "item": {
            "key1": "value1"
          }
        },
        {
          "item": {
            "key1": "value2"
          }
        }
      ],
      "key1": "value1"
    }
  }
]
So the root name is not included for nested associations. Is that a bug or am I missing something?

As far as I can tell, "include_root_in_json" does not work for nested associations but only at the very top level, like:
[
  {
    "videos": {
      "video": [
        { "id": 1 }
      ]
    }
  }
]
For this example, toggling the setting only adds or removes the outer "videos" root.
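If you need the nested roots without switching serializers, one workaround is to drop the :include option and build the items array yourself, so each item goes through its own as_json and picks up its own "item" root. A minimal sketch, assuming Rails 3.x behaviour where include_root_in_json = true makes as_json wrap the hash under the model's element name:
class Order < ActiveRecord::Base
  has_many :items

  def as_json(options = {})
    json = super(options)  # => { "order" => { ...attributes... } }
    # Serialize each item on its own so it keeps its "item" root.
    json["order"]["items"] = items.map { |item| item.as_json }
    json
  end
end
Then render json: @orders should produce the nested roots shown in the desired output above.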
Tip
I found that as_json is not really good if you are building something like an API where you need to be very flexible. For that reason I am using RABL; maybe you should give it a try: https://github.com/nesquena/rabl
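For the structure above, a RABL template would be something along these lines (just a sketch, assuming an @orders collection from the controller; by default RABL wraps both the collection members and the children in their own root keys):
# e.g. app/views/orders/index.json.rabl
collection @orders
attributes :key1
child :items do
  attributes :key1
end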

Related

Odata filter expression to query for a particular field name in json

I have the following Odata response
{
  "d": {
    "results": [
      {
        "__metadata": {
          "id": ....,
          "URI": .....,
          "type": .....
        },
        "SchemaId": "ABC"
      },
      {
        "__metadata": {
          "id": ....,
          "URI": .....,
          "type": .....
        },
        "SchemaId": "DEF"
      }
    ]
  }
}
I want to filter for all the SchemaId values. Can anyone help with the filter query?
"I want to filter for all the schemaId." -> Not sure what you mean, but if you want to filter for a specific SchemaId you have to do it like this:
.../EntitySet?$filter=SchemaId eq 'Whateva'
https://www.odata.org/getting-started/basic-tutorial/
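If the goal is instead to read back the SchemaId of every entry rather than filter on a particular value, the standard $select option projects just that field, for example:
.../EntitySet?$select=SchemaId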

Elasticsearch possessive_english stemmer query returns no hits

I was able to find this other question: Using of possessive_english stemmer in Elasticsearch, but it's been 3 years since there was any activity on it.
I am trying to get Elasticsearch to ignore the apostrophe (') when indexing and searching. For example:
POST my_index/_doc/
{
  "message" : "Mike's bike"
}
I want to be able to search for this document using "mikes", "mike's", "mike". I looked and thought that possessive_english should accomplish this task but I have been unable to get the expected results.
I created the index with
PUT /my_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "rebuilt_standard": {
          "tokenizer": "standard",
          "filter": [
            "lowercase", "my_stemmer"
          ]
        }
      },
      "filter": {
        "my_stemmer": {
          "type": "stemmer",
          "language": "possessive_english"
        }
      }
    }
  }
}
I tested the analyzer with
POST /my_index/_analyze
{
  "analyzer": "rebuilt_standard",
  "text": "Mike's bike"
}
And this is the result
{
  "tokens" : [
    {
      "token" : "mike",
      "start_offset" : 0,
      "end_offset" : 6,
      "type" : "<ALPHANUM>",
      "position" : 0
    },
    {
      "token" : "bike",
      "start_offset" : 7,
      "end_offset" : 11,
      "type" : "<ALPHANUM>",
      "position" : 1
    }
  ]
}
Looks like the analyzer is working. Then I inserted the document with:
POST my_index/_doc/
{
  "message" : "Mike's bike"
}
When searching for it, these two queries returned 0 results:
GET /my_index/_search
{
  "query": {
    "match": { "message": "mike" }
  }
}
GET /my_index/_search
{
  "query": {
    "match": { "message": "mikes" }
  }
}
but
GET /my_index/_search
{
  "query": {
    "match": { "message": "mike's" }
  }
}
returned results
It seems like I am missing the mapping configuration from the linked question, but I am not sure how to set it.
I tested the above with Kibana, but I am actually using Rails with the gems 'elasticsearch-model', 'elasticsearch-rails' and 'elasticsearch-persistence' in the repository pattern. I am also new to Rails, so I don't know whether it's my Rails configuration, or Elasticsearch, or both that need work.
I'll post them just in case:
include Elasticsearch::Persistence::Repository
include Elasticsearch::Persistence::Repository::DSL
client = Elasticsearch::Client.new(url: 'http://localhost:9200', log: true)
settings index: {
  number_of_shards: 1,
  analysis: {
    analyzer: {
      custom: {
        type: "custom",
        tokenizer: "standard",
        filter: [
          "lowercase",
          "english_possessive_stemmer",
        ]
      }
    },
    filter: {
      english_possessive_stemmer: {
        type: "stemmer",
        language: "possessive_english",
      }
    }
  }
}
mappings {
  indexes :icon, index: false
  indexes :properties, type: 'nested' do
    indexes :values
  end
  indexes :name
}
in the controller
repository = Repository.new
repository.create_index!(force: true)
repository.save(json)
results = repository.search(query: { match: { name: 'Mikes' } })
Your analyzer is working fine. I think you have not applied it to your mapping
PUT /my_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "rebuilt_standard": {
          "tokenizer": "standard",
          "filter": [
            "lowercase", "my_stemmer", "english_stemmer"
          ]
        }
      },
      "filter": {
        "my_stemmer": {
          "type": "stemmer",
          "language": "possessive_english"
        },
        "english_stemmer": {
          "type": "stemmer",
          "language": "english"
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "message": {
        "type": "text",
        "analyzer": "rebuilt_standard"  ---> pass the analyzer
      }
    }
  }
}
The possessive_english filter only removes the possessive "'s", so on its own it cannot match a search for "mikes" (it will work for "mike", though). You also need a stemmer such as "english", which reduces words to their base form.
I have an excellent article here for further reference.
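On the Rails side the same point applies: the analyzer has to be attached to the field you search on in the mappings block, otherwise the field is indexed with the default standard analyzer. A rough sketch against the elasticsearch-model mappings DSL, reusing the field and analyzer names from the question (adjust as needed):
mappings {
  indexes :icon, index: false
  indexes :properties, type: 'nested' do
    # 'custom' is the analyzer name defined in the settings block above
    indexes :values, type: 'text', analyzer: 'custom'
  end
  indexes :name, type: 'text', analyzer: 'custom'
}
Since the controller already recreates the index with create_index!(force: true), the new mapping is applied on the next run; add an "english" stemmer to the analyzer's filter chain as well if searching for "Mikes" needs to match.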

Elasticsearch exact match only for specific fields on multi_match fields

I'm trying to search only on the following fields:
name (product name)
vendor.username
vendor.name
categories_name
But the results are too broad; I want the results to match exactly what the user typed.
Example:
If I type Cloth A, I want the result to be exactly Cloth A, not something else that merely contains Cloth or A.
Here is my attempt:
```
GET /products/_search
{
  "query": {
    "filtered": {
      "query": {
        "multi_match": {
          "query": "cloth A",
          "fields": [
            "name",
            "vendor.name",
            "vendor.username",
            "categories_name"
          ]
        }
      },
      "filter": [
        {
          "term": {
            "is_available": true
          }
        },
        {
          "term": {
            "is_ready": true
          }
        },
        {
          "missing": {
            "field": "deleted_at"
          }
        }
      ]
    }
  }
}
```
How do I do that? Thanks in advance
Put this in your multi_match:
"multi_match": {
  "type": "best_fields"
}
This one works:
"multi_match": {
  "type": "phrase"
}
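For completeness, this is roughly what the request looks like with the phrase type, sketched with the elasticsearch-ruby client (the client URL and index name are placeholders, and the is_available/is_ready/deleted_at filters are left out for brevity):
require 'elasticsearch'
client = Elasticsearch::Client.new(url: 'http://localhost:9200')
# type: "phrase" requires the words of "Cloth A" to appear together as a phrase in one of the fields.
client.search(index: 'products', body: {
  query: {
    multi_match: {
      query:  'Cloth A',
      type:   'phrase',
      fields: ['name', 'vendor.name', 'vendor.username', 'categories_name']
    }
  }
})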

Access property in JSON response

I'm looking to grab the image 'src' values within this JSON response, but my attempts have left me at a loose end. Any help would be brilliant.
My Model
def test
  response = self.class.get("URLFORRESPONSE")
  @elements = response.parsed_response["extractorData"]
  @parsed = @elements.collect { |e| e['src'] }
end
JSON Response
{
"extractorData" : {
"url" : "http://testwebsite.com/",
"resourceId" : "409417ee21618b70d74b03231a793c2d7",
"data" : [ {
"group" : [ {
"image" : [ {
"src" : "test0.jpg"
} ]
}, {
"image" : [ {
"src" : "test1.jpg"
} ]
}, {
"image" : [ {
"src" : "test2.jpg"
} ]
}, {
"image" : [ {
"src" : "test3.jpg"
} ]
}, {
"image" : [ {
"src" : "test4.jpg"
} ]
}
Your JSON is invalid. It should be:
{
  "extractorData": {
    "url": "http://testwebsite.com/",
    "resourceId": "409417ee21618b70d74b03231a793c2d7",
    "data": [{
      "group": [{
        "image": [{
          "src": "test0.jpg"
        }]
      }, {
        "image": [{
          "src": "test1.jpg"
        }]
      }, {
        "image": [{
          "src": "test2.jpg"
        }]
      }, {
        "image": [{
          "src": "test3.jpg"
        }]
      }, {
        "image": [{
          "src": "test4.jpg"
        }]
      }]
    }]
  }
}
To extract the src's:
@parsed = @elements['data'][0]['group'].map { |g| g['image'][0]['src'] }
I know this is ugly as hell, but I hope this suggestion helps.
Since HTTParty's parsed_response returns a hash, and assuming you're using Ruby 2.3+, you can do:
@elements = response.parsed_response["extractorData"]
@elements.dig('data').collect { |h| h.dig('group').collect { |h| h.dig('image').collect { |h| h.dig('src') } } }
see it:
h = {"extractorData"=>{"url"=>"http://testwebsite.com/", "resourceId"=>"409417ee21618b70d74b03231a793c2d7", "data"=>[{"group"=>[{"image"=>[{"src"=>"test0.jpg"}]}, {"image"=>[{"src"=>"test1.jpg"}]}, {"image"=>[{"src"=>"test2.jpg"}]}, {"image"=>[{"src"=>"test3.jpg"}]}, {"image"=>[{"src"=>"test4.jpg"}]}]}]}}
h.dig('extractorData', 'data').collect{|h| h.dig('group').collect{|h| h.dig('image').collect{|h| h.dig('src')}}}
=> [[["test0.jpg"], ["test1.jpg"], ["test2.jpg"], ["test3.jpg"], ["test4.jpg"]]]
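A slightly flatter way to get the same list (just a sketch against the structure shown above) is to flat_map across the groups and images:
@elements = response.parsed_response["extractorData"]
# Flatten the nested data -> group -> image arrays and collect every src.
srcs = @elements['data']
  .flat_map { |d| d['group'] }
  .flat_map { |g| g['image'] }
  .map { |img| img['src'] }
# => ["test0.jpg", "test1.jpg", "test2.jpg", "test3.jpg", "test4.jpg"]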

Mongo query in Object of arrays

Document example
{
"_id": 1,
"test": {
"item_obj": {
"item1": ["a", "b"],
"item2": ["c"],
"item3": ["a", "d"]
}
}
}
I want to fetch documents where "a" exists anywhere in test.item_obj. "a" may exist in any of the arrays, and we don't know which keys are present inside item_obj (no idea whether item1, item2 or item3 exists).
I need a Rails/Mongoid query.
If this is your search case, then whatever way you look at it you need the JavaScript evaluation of the $where clause to resolve your current structure. As a shell example (since you need to use a JavaScript expression anyway):
db.collection.find(function() {
  var root = this.test.item_obj;
  return Object.keys(root).some(function(key) {
    // each value is an array, so check membership rather than equality
    return root[key].indexOf("a") !== -1;
  });
})
Or for mongoid that is something like:
func = <<-eof
  var root = this.test.item_obj;
  return Object.keys(root).some(function(key) {
    return root[key].indexOf("a") !== -1;
  });
eof
Model.for_js(func)
However, if you simply change your structure to define "item_objects" as an array, as follows:
{
  "_id": 1,
  "test": {
    "item_objects": [
      { "name": "item1", "data": ["a","b"] },
      { "name": "item2", "data": ["c"] },
      { "name": "item3", "data": ["a","d"] }
    ]
  }
}
Then asking for what you want here is as basic as:
db.collection.find({
  "test.item_objects.data": "a"
})
Or for mongoid:
Model.where( "test.item_objects.data" => "a" )
Nested arrays are not really a great idea though, so perhaps live with:
{
  "_id": 1,
  "test": {
    "item_objects": [
      { "name": "item1", "data": "a" },
      { "name": "item1", "data": "b" },
      { "name": "item2", "data": "c" },
      { "name": "item3", "data": "a" },
      { "name": "item3", "data": "d" }
    ]
  }
}
Which is basically the same thing, but a bit more long-winded. Ultimately, though, it is much easier to deal with in atomic updates. And of course the query to find the values in the document is exactly the same.
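If restructuring is not an option and you want to avoid $where, another route is an aggregation with $objectToArray (a sketch, assuming MongoDB 3.4.4 or newer), which turns the unknown keys of item_obj into an array of key/value pairs you can match on:
# Mongoid exposes the underlying driver collection, so the raw pipeline can be run directly.
Model.collection.aggregate([
  { '$addFields' => { 'items' => { '$objectToArray' => '$test.item_obj' } } },
  { '$match'     => { 'items.v' => 'a' } },  # matches when any value array contains "a"
  { '$project'   => { 'items' => 0 } }       # drop the helper field from the output
])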
