I'm migrating from Tire to Flex.
Basic search and index syncing work.
I figured the flex.parent line in the model would automatically create the _parent mapping, but indexing crashes.
I was not able to find any parent/child demo project.
Flex.yml:
settings:
  number_of_shards: 5
  number_of_replicas: 1
  # analysis:
  #   analyzer:
  #   tokenizer:
mappings:
  userprofile:
    startdatef:
      type: 'date'
      format: 'dateOptionalTime'
      fields:
        index: 'not_analyzed'
        untouched:
          type: 'date'
          index: 'not_analyzed'
  orgunit:
    org_name:
      type: 'string'
      index: 'analyzed'
      search_analyzer: orgunit_name_search
      index_analyzer: orgunit_name_index
    untouched:
      type: 'string'
      index: 'not_analyzed'
Parent model:
class Userprofile < ActiveRecord::Base
  include Flex::ModelIndexer
  include Flex::Model
  flex.sync self

  has_many :assignments,
           -> { order(startdate: :desc) }, dependent: :restrict_with_exception

  module Search
    include Flex::Scopes
    flex.context = Userprofile
    scope :alla, query([])
  end

  # rubocop:disable all
  def flex_source
    {
      id: id,
      fullname: fullname,
      firstname: firstname,
      lastname: lastname,
      pnr: pnr,
      gender: gender,
      asscount: asscount,
      created_at: created_at,
      updated_at: updated_at,
      user_id: user_id,
      creator_id: creator_id,
    }
  end
  # rubocop:enable all
end
Child model:
class Assignment < ActiveRecord::Base
  include Flex::ModelIndexer
  include Flex::Model
  flex.parent :userprofile, 'userprofile' => 'assignment' # This makes indexing break
  flex.sync self, :userprofile

  belongs_to :userprofile, counter_cache: true, touch: true

  module Search
    include Flex::Scopes
    flex.context = Assignment
    scope :alla, query([])
  end

  def flex_source
    {
      # _parent_id: userprofile_id,
      userprofile_id: userprofile_id,
      created_at: created_at,
      updated_at: updated_at
    }
  end
end
rake flex:import
Model Userprofile: Processing 37 documents in batches of 1000:
processing...: 100% ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||| Time: 0:00:01
Processed 37. Successful 37. Skipped 0. Failed 0.
Model Assignment: Processing 36 documents in batches of 1000:
processing...: 0% | | ETA: --:--:--
rake aborted!
activerecord-4.0.2/lib/active_record/relation/batches.rb:75:in `find_in_batches'
400: {"error":"ElasticSearchIllegalArgumentException[Can't specify parent if no parent field has been configured]","status":400}
flex-1.0.6/lib/flex/template.rb:54:in `do_render'
...
Tasks: TOP => flex:import
In your mapping you have to specify the _parent field; only then can ES link the ID in the _parent field you want to index to the corresponding index type. Try this as a template (substitute the variables appropriate to your index). It only adds this information to the mapping; it does not remove existing mappings:
curl -XPUT 'http://localhost:9200/$yourIndex/$yourChildTypeName/_mapping' -d '
{
  "$yourChildTypeName" : {
    "_parent" : {
      "type" : "$yourParentTypeName"
    }
  }
}'
Then try indexing again.
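If you want to confirm the mapping actually took before re-importing, a minimal check in plain Ruby (no extra gems) could look like this; the index name is a placeholder, and the child type is assumed to be assignment as in the models above:

require 'net/http'
require 'json'

# Fetch the mapping for the child type and pretty-print it.
uri = URI('http://localhost:9200/your_index/assignment/_mapping')
body = Net::HTTP.get(uri)
puts JSON.pretty_generate(JSON.parse(body))
# Expect to see "_parent": { "type": "userprofile" } under the child type.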
Let's say I have a product that has many samples and each sample has many references.
At some point in the code I get a Ruby hash (representing a product with nested attributes: samples > references) like this:
{
  name: '',
  category: 'toy',
  shop_id: '2',
  errors: {
    name: 'Cannot be empty'
  },
  samples_attributes: [
    {
      name: 'Big',
      errors: {},
      references_attributes: [
        {
          quantity: 2,
          price: nil,
          errors: { price: 'cannot be nil' }
        }
      ]
    },
    {
      name: '12',
      errors: { name: 'cannot be numerical' },
      references_attributes: [
        {
          quantity: -2,
          price: 12,
          errors: { quantity: 'cannot be negative' }
        }
      ]
    }
  ]
}
I would like a clean way to know whether each of the items has an empty :errors attribute or not. It could return true or false from a method #nested_resources_got_errors?
Sure, I could do that by nesting .each calls, but it is dirty:
def the_nested_resource_tree_has_errors?(product)
  !product[:errors].any? &&
    product[:samples_attributes].each do |sample|
      !sample[:errors].any? &&
        sample[:references_attributes].each do |reference|
          !reference[:errors].any?
        end
    end
end
The code has to be plain ruby.
We can make this into a generic recursive check. Using any? also makes it more efficient: it stops as soon as it finds an error.
def errors?(hash)
  # Does this level have errors itself?
  # (present? comes from ActiveSupport; in plain Ruby use `hash[:errors] && !hash[:errors].empty?`.)
  return true if hash[:errors].present?

  # Otherwise iterate through each value until an error is found.
  hash.any? do |_, value|
    case
    # Hash values are recursively checked.
    when value.is_a?(Hash)
      errors?(value)
    # Enumerable values (like the *_attributes arrays) have each element checked.
    when value.is_a?(Enumerable)
      value.any? { |v| errors?(v) }
    end
  end
end
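For example, calling it on the hash from the question (a sketch; product_hash is just a stand-in name for that hash):

require 'active_support/core_ext/object/blank' # for present? used inside errors?

product_hash = {
  name: '',
  errors: { name: 'Cannot be empty' },
  samples_attributes: [
    { name: 'Big', errors: {}, references_attributes: [] }
  ]
}

errors?(product_hash)                                    # => true (top-level :errors is non-empty)
errors?(name: 'ok', errors: {}, samples_attributes: [])  # => false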
But this, and many other things, would be easier if these were objects. Set up some models; each one only needs to deal with itself and its immediate children.
# A module for the error attribute and detecting errors.
module HasErrors
  extend ActiveSupport::Concern

  included do
    attr_accessor :errors
  end

  def errors?
    errors.present?
  end
end

class Product
  include ActiveModel::Model
  include HasErrors

  attr_accessor :name, :category, :shop_id, :samples

  # Check itself, then check its samples.
  def errors?
    super || samples.any? { |sample| sample.errors? }
  end
end

class Sample
  include ActiveModel::Model
  include HasErrors

  attr_accessor :name, :references

  # Check itself, then check its references.
  def errors?
    super || references.any? { |ref| ref.errors? }
  end
end

class Reference
  include ActiveModel::Model
  include HasErrors

  attr_accessor :quantity, :price
end
Then you can put the data into objects and call product.errors?.
product = Product.new(
  name: '',
  category: 'toy',
  shop_id: '2',
  errors: {
    # name: 'Cannot be empty'
  },
  samples: [
    Sample.new(
      name: 'Big',
      errors: {},
      references: [
        Reference.new(
          quantity: 2,
          price: nil,
          errors: {
            price: 'cannot be nil'
          }
        )
      ]
    ),
    Sample.new(
      name: '12',
      errors: {
        name: 'cannot be numerical'
      },
      references: [
        Reference.new(
          quantity: -2,
          price: 12,
          errors: {
            quantity: 'cannot be negative'
          }
        )
      ]
    )
  ]
)

p product.errors?
If we have models, we can use validations, and the errors hashes are not necessary.
module ValidateList
  extend ActiveSupport::Concern

  class_methods do
    def validate_list(attribute)
      validates_each attribute do |record, attr, values|
        # Check if any element of the list is invalid.
        record.errors.add(attr, :invalid) if values.any?(&:invalid?)
      end
    end
  end
end
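The HasName concern referenced in the models below isn't defined in the answer; a minimal sketch of what it might contain (an assumption, mirroring the HasErrors concern above):

# Assumed concern: a name attribute plus a simple presence validation.
module HasName
  extend ActiveSupport::Concern

  included do
    attr_accessor :name
    validates :name, presence: true
  end
end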
class Product
  include ActiveModel::Model
  include HasName
  include ValidateList

  attr_accessor :category, :shop_id, :samples
  validate_list :samples
end

class Sample
  include ActiveModel::Model
  include HasName
  include ValidateList

  attr_accessor :references
  validate_list :references
end

class Reference
  include ActiveModel::Model

  attr_accessor :quantity, :price

  validates :price, :quantity,
            presence: true,
            numericality: {
              greater_than_or_equal_to: 0
            }
end
p product.valid?
p product.errors.details
Using validations will allow these models to work well with other parts of Rails.
I have a Rails 4 app using Elasticsearch and Searchkick for a site-wide search page.
I have configured the models and their associations using Searchkick's search_data, but it's not working as per my requirement: users should be able to search all venues by location, name, capacity, event (marriage, engagement, etc.) and food_type (veg/non-veg). So, precisely, when I search for venues with the required params, I still don't get the desired results.
Below is the Elasticsearch query logged on the server...
[SITE WIDE SEARCH] **************SEARCH QUERY***["new", "Mumbai, Maharashtra", "3000", "VEG", "BUSINESS MEETING"]******************
Venue Search (7.2ms) curl http://localhost:9200/venues_development/_search?pretty -d '{"query":{"bool":{"must":{"dis_max":{"queries":[{"match":{"_all":{"query":"Mumbai, Maharashtra","boost":10,"operator":"and","analyzer":"searchkick_search"}}},{"match":{"_all":{"query":"Mumbai, Maharashtra","boost":10,"operator":"and","analyzer":"searchkick_search2"}}},{"match":{"_all":{"query":"Mumbai, Maharashtra","boost":1,"operator":"and","analyzer":"searchkick_search","fuzziness":1,"prefix_length":0,"max_expansions":3,"fuzzy_transpositions":true}}},{"match":{"_all":{"query":"Mumbai, Maharashtra","boost":1,"operator":"and","analyzer":"searchkick_search2","fuzziness":1,"prefix_length":0,"max_expansions":3,"fuzzy_transpositions":true}}}]}},"filter":[{"range":{"capacity_in_persons":{"to":"3000","include_upper":true}}},{"term":{"food_type":"VEG"}},{"term":{"event_type_name":"BUSINESS MEETING"}}]}},"size":1000,"from":0,"timeout":"11s","_source":false}'
models/venue.rb:
##columns of venue.rb
##=> Venue(id: integer, name: string, description: text,active: boolean, announcements_count: integer, comments_count: integer, pictures_count: integer, videos_count: integer, created_at: datetime, updated_at: datetime, capacity_in_persons: string, workflow_state: string)
### associations
has_many :event_categories
has_many :event_types, through: :event_categories
has_one :address, :as=> :addressable, :dependent => :destroy
###elasticsearch config starts here
searchkick word_middle:["name^10", :slug, :capacity_in_persons], locations: ["location"]
def search_data
  {
    name: name, analyzer: 'english', #: :word_start, misspellings: false},
    capacity_in_persons: capacity_in_persons,
    food_type: food_type,
    slug: slug,
    ## has many event types
    event_type_name: event_types.map(&:name),
    ratings: ratings.map(&:stars),
    ## location: [self.address.latitude, self.address.longitude],
    location: [self.address.latitude, self.address.longitude],
    picture_url: pictures.select { |p| p == pictures.last }.map do |i|
      { original: i.original_url }
    end
  }
end
##to eager load other associations
scope :search_import, -> { includes(:address, :event_types, :pricing_details, :ratings) }
after_save :reindex if try(:address).present?
###### controller action ##########
@query = []
@query << params[:venue_name] if params[:venue_name].present?
@query << params[:address] if params[:address].present?
@query << params[:venue_capacity_in_persons] if params[:venue_capacity_in_persons].present?
@query << params[:food_type] if params[:food_type].present?
@query << params[:event_name] if params[:event_name].present?
@query = @query.flatten.compact
logger.tagged("SITE WIDE SEARCH") { logger.info "**************SEARCH QUERY***#{@query}******************" }
##TODO-add constraints to handle range for capacity in persons
####halls = Hall.get_completed_halls_only.paginate(:page => params[:page]).search(@query).results
###halls = Hall.get_completed_halls_only.search(@query)
@halls = Hall.get_completed_halls_only.search params[:address],
  where: {
    capacity_in_persons: { lte: params[:venue_capacity_in_persons] },
    food_type: params[:food_type],
    event_type_name: params[:event_name]
  }
I have added changes to my address.rb to find venues nearby, and I think it is working fine, but any suggestions are welcome. address.rb is polymorphic and has an address_1 attribute that stores the address coming from the Google dropdown, along with geocoding from the geocoder gem.
models/address.rb
###address belongs_to venue
searchkick locations: [:address_1]
## call Address.reindex if this method is changed
def search_data
  attributes.except("id").merge(
    address_1: { lat: latitude, lon: longitude },
    city: city,
    state: state,
    zipcode: zipcode ## unless addressable_type == "Hall"
  )
end
In my app I have a BlogPost model and a User model that are related through a relation named author. To serve data from my Rails app I use active_model_serializers with this definition:
class Blog::PostSerializer < ActiveModel::Serializer
  embed :ids, include: true
  attributes :id, :title, :text, :created_at, :updated_at

  has_one :author
  has_many :assets
end
When I fetch this using an Ember Data model:
Admin.BlogPost = DS.Model.extend({
  author: DS.belongsTo('User'),
  title: DS.attr('string'),
  text: DS.attr('string'),
  createdAt: DS.attr('date'),
  updatedAt: DS.attr('date')
});
There is an error:
Uncaught Error: Assertion Failed: You looked up the 'author' relationship on a 'blog.post' with id 1 but some of the associated records were not loaded. Either make sure they are all loaded together with the parent record, or specify that the relationship is async (`DS.belongsTo({ async: true })`)
This is caused by my response looking like:
{
  'blog_posts': [
    {
      id: 1,
      author_id: 1
    },
    // …
  ],
  'authors': [
    { id: 1, /* … */ }
  ]
}
Is there any way to change 'authors' in the response to 'users', or to use 'authors' as an alias for 'users' in the serializer?
From the active_model_serializers 0.8 documentation: https://github.com/rails-api/active_model_serializers/tree/0-8-stable
You can also specify a different root for the embedded objects than the key used to reference them:
class PostSerializer < ActiveModel::Serializer
  embed :ids, :include => true
  attributes :id, :title, :body
  has_many :comments, :key => :comment_ids, :root => :comment_objects
end
This would generate JSON that would look like this:
{"post": {
"id": 1,
"title": "New post",
"body": "A body!",
"comment_ids": [ 1 ]
},
"comment_objects": [
{ "id": 1, "body": "what a dumb post" }
]
}
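Applied to the serializer from the question, that might look roughly like this (a sketch, assuming has_one accepts the same :key and :root options as has_many does in 0.8):

class Blog::PostSerializer < ActiveModel::Serializer
  embed :ids, include: true
  attributes :id, :title, :text, :created_at, :updated_at

  # Keep the author_id key in blog_posts, but sideload the records under "users".
  has_one :author, key: :author_id, root: :users
  has_many :assets
end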
Just define a method in your serializer named users and return authors from it, i.e.:
attributes :id, :title, :text, :created_at, :updated_at, :users

def users
  object.authors
end
This trick works with a "has_many" relation, but it fails with "embeds_many". Any ideas?
class Country
  include Mongoid::Document

  field :name, type: String
  embeds_many :cities
end

class City
  include Mongoid::Document

  field :name, type: String
  field :full_name, type: String, default: -> { "#{name}, #{country.name}" }

  embedded_in :country
end
1.9.3p392 :025 > c = Country.find_or_create_by(name: 'foo')
=> #<Country _id: foo, name: "foo">
1.9.3p392 :026 > c.cities.find_or_create_by(name: 'bar')
NoMethodError: undefined method `city' for nil:NilClass
So it fails on the line field :full_name, type: String, default: ->{ "#{name}, #{country.name}" }, because country is not defined at that moment.
You need to check for country first; then it will return country.name:
field :full_name, type: String, default: ->{ "#{name}, " << country.name if country }
I could not get this to work with string interpolation, but append (<<) works; it concatenates country.name onto the string.
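If the plain interpolation form really misbehaves as described above, a ternary guard is another way to express the same idea (a sketch, not verified against Mongoid's default lambdas):

field :full_name, type: String, default: -> { country ? "#{name}, #{country.name}" : name }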
I'm trying to index a geo_point field in Elasticsearch with the Tire gem. Here is my Tire mapping for my ActiveRecord model:
class Availability < ActiveRecord::Base
  belongs_to :user
  attr_accessible :date, :latitude, :longitude

  include Tire::Model::Search
  include Tire::Model::Callbacks

  tire do
    mapping do
      indexes :id, type: 'integer', index: 'not_analysed'
      indexes :user_id, type: 'integer', index: 'not_analysed'
      indexes :user_firstname, type: 'string', as: 'user_firstname'
      indexes :user_lastname, type: 'string', as: 'user_lastname'
      indexes :user_level, type: 'integer', as: 'user_level'
      indexes :date, type: 'date'
      indexes :location, type: 'geo_type', as: 'location'
    end
  end

  # def location
  #   "#{latitude},#{longitude}"
  # end

  def location
    [longitude.to_f, latitude.to_f]
  end

  def user_firstname
    user.firstname
  end

  def user_lastname
    user.lastname
  end

  def user_level
    user.level
  end
end
When I create the mapping (bundle exec rake environment tire:import CLASS=Availability FORCE=true), Elasticsearch seems to ignore the geo_point type for the location field.
Here is the result of the http://localhost:9200/availabilities/_mapping call:
{
  availabilities: {
    availability: {
      properties: {
        date: {...},
        id: {...},
        location: {
          type: "double"
        },
        user_firstname: {...},
        user_id: {...},
        user_lastname: {...},
        user_level: {...}
      }
    }
  }
}
The location field is indexed as an array of doubles on the documents (results of http://localhost:9200/availabilities/_search):
{
  id: 8,
  ...
  location: [
    2.301643,
    48.780651
  ]
}
When I change the location method to:
def location
  "#{latitude},#{longitude}"
end
This is another way to index a geo_point field according to the documentation (http://www.elasticsearch.org/guide/reference/mapping/geo-point-type.html). With it, the resulting mapping for location is:
location: {
  type: "string"
},
And of course the location field is indexed as a string:
{
  id: 4,
  ...
  location: "48.780651,2.301643"
}
Any idea why the geo_point is ignored in my mapping?
Thanks!
The location index type was mistyped.
You used geo_type instead of geo_point:
indexes :location, type: 'geo_point', as: 'location'
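Note that Elasticsearch will not change the type of an already-mapped field in place, so after correcting the typo the index has to be rebuilt; re-running the import from the question with FORCE=true drops and recreates it:

bundle exec rake environment tire:import CLASS=Availability FORCE=true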