I have an STI model that I want to make searchable with ElasticSearch and Tire. The issue I am having is that when Tire creates the mappings, it seems to ignore my custom analyzers for the second model. Below is an example of my models.
class Account < ActiveRecord::Base
  attr_accessible :name, :type

  include Tire::Model::Search
  include Tire::Model::Callbacks

  tire.settings :analysis => {
    :analyzer => {
      "custom_search_analyzer" => {
        "tokenizer" => "keyword",
        "filter" => "lowercase"
      },
      "custom_index_analyzer" => {
        "tokenizer" => "keyword",
        "filter" => ["lowercase", "substring"]
      }
    },
    :filter => {
      :substring => {
        "type" => "nGram",
        "min_gram" => 1,
        "max_gram" => 20
      }
    }
  } do
    mapping do
      indexes :id, :type => 'integer', :include_in_all => false
      indexes :name, :type => 'string', :search_analyzer => :custom_search_analyzer, :index_analyzer => :custom_index_analyzer
    end
  end

  def to_indexed_json
    hash = {}
    hash[:id] = id
    hash[:name] = name
    hash.to_json
  end
end
class StandardAccount < Account
tire.index_name 'accounts'
end
class SuperAccount < Account
tire.index_name 'accounts'
end
When I create the index through Tire, either with the rake task or by creating a model instance, the mappings are created, but the custom analyzers are not applied to the inherited models. If I look at the mappings using
curl -XGET 'http://127.0.0.1:9200/accounts/_mapping?pretty=1'
I get:
{
  "accounts" : {
    "account" : {
      "properties" : {
        "id" : {
          "type" : "integer",
          "include_in_all" : false
        },
        "name" : {
          "type" : "string",
          "index_analyzer" : "custom_index_analyzer",
          "search_analyzer" : "custom_search_analyzer"
        }
      }
    },
    "standard_account" : {
      "properties" : {
        "id" : {
          "type" : "long"
        },
        "name" : {
          "type" : "string"
        }
      }
    },
    "super_account" : {
      "properties" : {
        "id" : {
          "type" : "long"
        },
        "name" : {
          "type" : "string"
        }
      }
    }
  }
}
Even if I move the mapping declarations to the inherited classes, only the first model created seems to pick up the extra options. I can create the indexes manually through ElasticSearch, but I was wondering if there is a way to do it with Tire? Or do I have something set up incorrectly?
You might have figured out the answer already, but I think this might help you, or others having the same problem:
https://stackoverflow.com/a/13660263/1401343
-Vlad
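Since the question mentions manually creating the indexes through ElasticSearch as a fallback, the request body for that would look roughly like the sketch below. It simply repeats the settings and mapping from the `Account` model for each STI type (names and options are taken from the question; the syntax is the pre-1.x ElasticSearch style that Tire targets). It could be sent with `curl -XPUT 'http://127.0.0.1:9200/accounts' -d @body.json`:

```json
{
  "settings": {
    "analysis": {
      "analyzer": {
        "custom_search_analyzer": { "tokenizer": "keyword", "filter": "lowercase" },
        "custom_index_analyzer": { "tokenizer": "keyword", "filter": ["lowercase", "substring"] }
      },
      "filter": {
        "substring": { "type": "nGram", "min_gram": 1, "max_gram": 20 }
      }
    }
  },
  "mappings": {
    "account": {
      "properties": {
        "id": { "type": "integer", "include_in_all": false },
        "name": { "type": "string", "index_analyzer": "custom_index_analyzer", "search_analyzer": "custom_search_analyzer" }
      }
    },
    "standard_account": {
      "properties": {
        "id": { "type": "integer", "include_in_all": false },
        "name": { "type": "string", "index_analyzer": "custom_index_analyzer", "search_analyzer": "custom_search_analyzer" }
      }
    },
    "super_account": {
      "properties": {
        "id": { "type": "integer", "include_in_all": false },
        "name": { "type": "string", "index_analyzer": "custom_index_analyzer", "search_analyzer": "custom_search_analyzer" }
      }
    }
  }
}
```

Because the analysis settings live at the index level, defining them once this way makes the analyzers available to all three mapping types.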
I would like to know how to add an extra field to the response of collection.aggregate.
The query below groups activities by user_id, and I would like to also include the user_name in the response.
DB Models
class Activity
include Mongoid::Document
field :hd_race_id, type: Float
field :metric_elapsed_time, type: Float
field :metric_distance, type: Float
field :user_name, type: String
belongs_to :user
...
class User
include Mongoid::Document
field :user_name, type: String
has_many :activities
...
Query
Activity.collection.aggregate([
{
"$group" => {
"_id" => "$user_id",
"distance" => { "$sum" => "$metric_distance" },
"time" => { "$sum" => "$metric_elapsed_time" },
},
},
{ "$sort" => { "distance" => -1 } },
])
Thank you in advance
Use the operator $first (aggregation accumulator) inside the $group stage.
For example:
"user_name": {"$first": "$user_name"}
or, in the Ruby (Mongoid) syntax the query above uses:
"user_name" => {"$first" => "$user_name"},
For an example, see the "Group & Total" chapter in my Practical MongoDB Aggregations book
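Applied to the pipeline from the question, the `$group` stage with the extra field might look like the sketch below. It is built as plain Ruby data so the shape can be checked without a database; in the app you would pass it to `Activity.collection.aggregate(pipeline)` unchanged:

```ruby
# Aggregation pipeline from the question, extended with "$first" to carry
# the user_name of the first document in each group into the result.
pipeline = [
  {
    "$group" => {
      "_id"       => "$user_id",
      "user_name" => { "$first" => "$user_name" },
      "distance"  => { "$sum" => "$metric_distance" },
      "time"      => { "$sum" => "$metric_elapsed_time" },
    },
  },
  { "$sort" => { "distance" => -1 } },
]

puts pipeline.first["$group"].keys.inspect
# => ["_id", "user_name", "distance", "time"]
```

`$first` works here because all documents in a group share the same `user_id`, so any of them supplies the same `user_name`.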
I have a model User that I am indexing in ElasticSearch through Searchkick:
class User < ActiveRecord::Base
searchkick callbacks: :async, routing: true
end
Also I have a background job to reindex conditionally.
The problem is that in some cases I have repeated indexes for a single user:
2.3.8 :191 > u = User.find_by_email("john.doe#mail.com")
 => #<User id: 401953, email: "john.doe#mail.com", name: "John Doe", university_id: 83, device_id: "b3f3d62839ca6b981ea236562e6da9ff", app_version: "6.7.0">
But I am getting repeated Searchkick documents with the same _id and _index!
2.3.8 :192 > ap User.search("*", where: {email: "john.doe#mail.com"}, load: false).results
[
[0] {
"_index" => "users_production_20200717081323100",
"_type" => "user",
"_id" => "401953",
"_score" => 1.0,
"_routing" => "123",
"email" => "john.doe#mail.com",
"app_version" => "6.6.0",
"id" => "401953"
},
[1] {
"_index" => "users_production_20200717081323100",
"_type" => "user",
"_id" => "401953",
"_score" => 1.0,
"_routing" => "123",
"email" => "john.doe#mail.com",
"app_version" => "6.7.0",
"id" => "401953"
}
]
The only attribute that changed is app_version. At some point the user had version "6.6.0", and when it was updated to 6.7.0 the index entry was duplicated. Why is this happening? Is this the normal behavior of the gem? Shouldn't the document be updated, leaving a single index entry?
Regards
Mongoid pluck returns duplicate embedded results (I am not concerned about duplicate rows) for embedded fields.
e.g. (user is an embedded document of SomeModel):
SomeModel.where(condition).pluck(:region, "user.name", "user.lastname")
Results:
[["amr",
{"name" => "mark", "lastname" => "goodman"},
{"name" => "mark", "lastname" => "goodman"}],
["amr",
{"name" => "john", "lastname" => "cena"},
{"name" => "john", "lastname" => "cena"}]
]
I was expecting something like below:
[["amr",
{"name" => "mark"},
{"lastname" => "goodman"}],
["amr",
{"name" => "john"},
{"lastname" => "cena"}]
]
Similarly, if I query multiple fields from embedded doc, it creates that many duplicate hashes.
Not sure if I am doing something wrong here.
I'm not sure why that's the case, but you can get the desired result using map instead of pluck:
SomeModel.where(condition).map { |m| [m.region, m.user.name, m.user.lastname] }
This should give you the results:
[
["amr", "mark", "goodman"],
["amr", "john", "cena"]
]
Or:
SomeModel.where(condition).map do |m|
[m.region, { 'name' => m.user.name }, { 'lastname' => m.user.lastname }]
end
Should give you the results:
[
["amr", { "name" => "mark" }, { "lastname" => "goodman" }],
["amr", { "name" => "john" }, { "lastname" => "cena" }]
]
I'm just trying to loop through my Story model, create a JSON object, insert it into a call and send it, but I'm not sure how to loop through the stories.
I've done this:
@stories = Array.new
Story.where(newsletter_id: current_user.selected_newsletter).each do |story|
  @stories << {
    :title => story.title,
    :image_url => story.image_url
  }
end
and I'm trying to insert the result into this JSON object:
"message" => {
"attachment" => {
"type" => "template",
"payload" => {
"template_type" => "generic",
"elements" => [{this is the array}]
}
}
}
The array with multiple stories should look like this:
[
{
"title" => "title....1",
"image" => "image....1"
},
{
"title" => "title....2",
"image" => "image....3"
}
....
]
Try the following:
@stories = Story.where(newsletter_id: current_user.selected_newsletter)
                .select(:title, 'image_url as image').as_json(except: :id)
And then:
{
  "message" => {
    "attachment" => {
      "type" => "template",
      "payload" => {
        "template_type" => "generic",
        "elements" => @stories
      }
    }
  }
}
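To see the whole thing end to end without ActiveRecord, here is a sketch with plain hashes standing in for the selected Story records (`stories` below plays the role of the query result):

```ruby
# Plain-Ruby sketch of assembling the message payload. In the app,
# `stories` would come from Story.where(...).select(:title, 'image_url as image').
stories = [
  { "title" => "title....1", "image" => "image....1" },
  { "title" => "title....2", "image" => "image....2" },
]

message = {
  "message" => {
    "attachment" => {
      "type" => "template",
      "payload" => {
        "template_type" => "generic",
        "elements" => stories,  # the array drops straight into "elements"
      },
    },
  },
}

puts message["message"]["attachment"]["payload"]["elements"].length
# => 2
```

Calling `message.to_json` then yields the final JSON string to send.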
I have a list of countries and I want users to be able to sort results by country. So I have this helper:
def facets_for model, field
  ul = ""
  links = ""
  model.facets[field]['terms'].each do |facet|
    links << content_tag('li') do
      link_to("#{facet['term']} #{facet['count']}", params.merge(field => facet['term']))
    end
  end
  ul << content_tag("ul", class: field) do
    links.html_safe
  end
  ul.html_safe
end
and in model:
class Model
  ....
  mapping do
    indexes :country do
      indexes :name, :type => :string, index: "not_analyzed"
    end
  end

  def self.search params
    ...
    filter :term, destination: params[:destination] if params[:destination].present?
    facet("destination") { terms 'country.name' }
    ...
  end
but
facet['term']
always returns the country name in lowercase. I could work around it with Country.find(facet).name, but I think that is unnecessary. Is there any way to store the same string value in the facet as in the field?
Updated
my mapping:
{
  "wishes" : {
    "wish" : {
      "properties" : {
        "body" : {
          "type" : "string"
        },
        "country" : {
          "properties" : {
            "name" : {
              "type" : "string"
            }
          }
        },
        "country_id" : {
          "type" : "string"
        },
        "created_at" : {
          "type" : "date",
          "format" : "dateOptionalTime"
        }
      } ... }}}
Your mapping was not created correctly; you can try to reindex your data.
Model.index.delete # to delete index with bad mapping
Model.create_elasticsearch_index # this should create index with mapping defined in Model
And after that you can try to run Model.import again.
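If the reindex works, fetching the mapping again should show the `not_analyzed` setting from the model actually applied to `country.name`, along these lines (a sketch of the relevant fragment only):

```json
"country" : {
  "properties" : {
    "name" : {
      "type" : "string",
      "index" : "not_analyzed"
    }
  }
}
```

With the field not analyzed, the terms facet returns the stored value verbatim instead of the lowercased tokens.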