I'm trying to use Elasticsearch via the Tire gem for a multi-tenant app. The app has many accounts, with each account having many users.
Now I would like to index users based on account id.
User Model:
include Tire::Model::Search

mapping do
  indexes :name, :boost => 10
  indexes :account_id
  indexes :company_name
  indexes :description
end

def to_indexed_json
  to_json(:only => [:name, :account_id, :description, :company_name])
end
Search Query:
User.tire.search do
  query do
    filtered do
      query { string 'vamsikrishna' }
      filter :term, :account_id => 1
    end
  end
end
The filter works fine and the results are displayed only for the filtered account id (1). But when I search for the query string '1':
User.tire.search do
  query do
    filtered do
      query { string '1' }
      filter :term, :account_id => 1
    end
  end
end
Let's say there is a user named 1. In that case, the results include all the users with account id 1, plus that user. This is because I added account_id to to_indexed_json. Now, how can I display only the user named 1? (All users should not appear in the hits; only the user named 1 should be displayed.)
When there are no users with 1 as the name, company name or description, I just don't want any results to be displayed. But in my case, as I explained, I get all the users in account id 1.
You are searching on the _all special field, which contains a copy of the content of all the fields that you indexed. You should search on a specific field to get the right results, like this: field_name:1.
If you want, you can search on multiple fields using the query_string query:
{
  "query_string" : {
    "fields" : ["field1", "field2", "field3"],
    "query" : "1"
  }
}
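In the Tire DSL from the question, that might look roughly like this (a hedged sketch, assuming the :fields option is passed straight through to Elasticsearch's query_string query):

User.tire.search do
  query do
    filtered do
      # Restrict the text search to specific fields instead of _all
      query  { string '1', :fields => ['name', 'company_name', 'description'] }
      filter :term, :account_id => 1
    end
  end
end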
I have a User model and a Country model, and the User model has a has_many :countries relationship. Country has 3 attributes: :id, :name, :is_main.
Let's say the countries table is currently populated as follow:
id    name      is_main
----------------------------------
1     USA       true
2     Germany   nil
3     France    nil
4     England   nil
Let's say a user is created that has the countries USA and Germany. So in this case user.countries.pluck(:id) would return [1,2].
What I would like to achieve is that when editing the user's countries, a dropdown will appear and I am only allowed to add (or remove) countries whose is_main attribute is nil. In other words, in the dropdown, the USA option should either be disabled or hidden completely. At the same time, the USA should remain in user.countries after form submission, plus any new countries that have been added in the update action.
In short:
Original => user.countries.pluck(:id) => [1,2]
In edit form, if I add France to the user, the end result should be user.countries.pluck(:id) => [1,2,3]
I have tried the following:
f.collection_select(:countries, Country.where(is_main: nil), :id, :name, {}, {:multiple => true})
In doing this, the dropdown will display all the countries for me to add except for USA, which is what I desired. But the problem is when I click submit, the params[:country_ids] will be [2, 3]. As a result, after the update action, user.countries.pluck(:id) would become [2,3] instead of the desired [1,2,3], effectively removing USA's id.
Is there a way to work around this? I have also tried adding the :disabled option in collection_select to disable the USA option, but params[:country_ids] would still be [nil, 2, 3]. I'd appreciate it if anyone could advise me on this.
I've come across this issue in my application. In Rails 4+, set the include_hidden option to false on multi-select select tags. Here's how I did it:
= f.input :countries, as: :select, collection: Country.where(is_main: nil).map{|country| [country.id, country.name]}, input_html: { multiple: true, data: { placeholder: "Countries"} }, include_hidden: false
Hope this helps you solve your issue
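If you're not using simple_form, the plain collection_select helper accepts the same option (if I remember correctly it arrived around Rails 4.1; double-check where the option goes for your version). A hedged equivalent, using :country_ids as the attribute so it maps to the ids param:

= f.collection_select :country_ids, Country.where(is_main: nil), :id, :name, { include_hidden: false }, { multiple: true, data: { placeholder: "Countries" } }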
Actually using your example for multiple selection, your params after submit should have params[:country_ids] = [2,3]. If you are not giving the option to select USA, then it just won't be included.
You can disable options doing this:
// Your select id should be 'user_country_ids', if not change it
// This assumes that your first item will always be USA
$('#user_country_ids option:eq(0)').prop('disabled', true);
Anyway, this JS can be bypassed, so you will need to handle this on your backend too by adding a validation in your User model:
class User
  has_many :countries

  validate :main_countries_always_associated

  private

  def main_countries_always_associated
    errors.add(:countries, "must include main ones") if Country.where(is_main: true).any? { |c| !self.countries.include?(c) }
  end
end
EDIT:
If you want the main countries to always be part of each user's countries, you can use a before_validation callback to override the selection. I also recommend including the disabled options in the multiple select so the user is aware of this; with that in place the validation may become unnecessary.
class User
  has_many :countries

  before_validation :associate_main_countries

  # You should use something like this if you want your dropdown to always
  # show the main countries as selected
  def self.new_custom_user
    User.new(countries: Country.where(is_main: true))
  end

  private

  def associate_main_countries
    Country.where(is_main: true).each do |c|
      self.countries << c unless self.countries.include?(c)
    end
  end
end
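A rough console walk-through of how the callback plays out (a hedged sketch; the IDs come from the question's countries table and country_ids= is the standard collection ids setter):

user = User.new_custom_user       # countries => [USA] (id 1)
user.country_ids = [2, 3]         # what the form submits: USA omitted
user.valid?                       # before_validation re-adds the main countries
user.countries.map(&:id).sort     # => [1, 2, 3]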
I have a Rails app in which I want to use Thinking Sphinx for search. I have a has_many :through relationship between the following models: Product has many Types through ProductType.
# Product.rb
has_many :product_types
has_many :types, through: :product_types

# Type.rb
has_many :product_types
has_many :products, through: :product_types

# ProductType.rb
belongs_to :product
belongs_to :type
In my ProductsController index action I want to be able to filter which products are shown in the view based on given Type ids.
My relevant indexes currently look like this (note: I haven't used Thinking Sphinx in a long time):
# product_index.rb
ThinkingSphinx::Index.define :product, :with => :active_record do
  indexes name, :sortable => true
  indexes description
  indexes brand.name, as: :brand, sortable: true
  indexes product_types.type.id, as: :product_types

  has created_at, updated_at
end
# type_index.rb
ThinkingSphinx::Index.define :type, :with => :active_record do
  indexes name, :sortable => true
end
# product_type_index.rb
ThinkingSphinx::Index.define :product_type, :with => :active_record do
  has product_id, type: :integer
  has type_id, type: :integer
end
I currently pass an array of :product_types ids in a link_to, like this (let me know if there is a better way to do it):
= link_to "Web shop", products_path(product_types: Type.all.map(&:id), brand: Brand.all.map(&:id)), class: "nav-link"
In my ProductsController I try to filter the result based on the given Type ids like this:
product_types = params[:product_types]
@products = Product.search with_all: { product_types: product_types.collect(&:to_i) }
When I run rake ts:rebuild I get the following error:
indexing index 'product_type_core'...
ERROR: index 'product_type_core': No fields in schema - will not index
And when I try to view the page in the browser I get the following error:
index product_core: no such filter attribute 'product_types'
- SELECT * FROM `product_core` WHERE `sphinx_deleted` = 0 AND
`product_types` = 1 AND `product_types` = 2 AND `product_types` = 3
LIMIT 0, 20; SHOW META
Any ideas on how to properly set up my indexes (and query) for this case?
There are a few issues to note here:
Firstly, the error you're seeing during rake ts:rebuild is pointing out that you've not set any fields in your ProductType Sphinx index - no indexes calls for text data you wish to search on. Are you actually searching on ProductType at all? If so, what text are you expecting people to match by?
If you're not searching on that model, there's no need to have a Sphinx index for it.
Secondly, the issue with your search: you're filtering on product_types with integers, which makes sense. However, in your index you've defined product_types as a field (using indexes) rather than an attribute (using has). Given these are integer values and you're not expecting someone to type an ID into a search input, you'll almost certainly want this to be an attribute instead. So change the indexes to a has for that line in your Product index definition, and run ts:rebuild.
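For illustration, a hedged sketch of what the adjusted Product index and the matching controller call could look like (the type_id column is assumed from the ProductType model above; the attribute keeps the :product_types name so the existing filter works):

# product_index.rb -- product_types is now an attribute (has), not a field
ThinkingSphinx::Index.define :product, :with => :active_record do
  indexes name, :sortable => true
  indexes description
  indexes brand.name, as: :brand, sortable: true

  has product_types.type_id, as: :product_types
  has created_at, updated_at
end

# ProductsController#index
product_types = Array(params[:product_types])
@products = Product.search with_all: { product_types: product_types.map(&:to_i) }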
In my multi-tenant app (account based, with a number of users per account), how would I update the index for a particular account when a user document is changed?
Using Elasticsearch via the Tire gem.
Rails 2.3 app - applied changes to enable support for Rails 2.3 as per loe/tire's commit
Account Model:
include Tire::Model::Search

Tire.index('account_1') do
  create(
    :mappings => {
      :user => {
        :properties => {
          :name         => { :type => :string, :boost => 10 },
          :company_name => { :type => :string, :boost => 5 }
        }
      },
      :comments => {
        :properties => {
          :description => { :type => :string, :boost => 5 }
        }
      }
    }
  )
end
As you can see above, there are two models here: user and comments. Is this the correct way to address a single index with multiple models?
In that case, how do I update the index when only a user document or a comment document is changed?
Usually when you are indexing a model, it is good to index its own attributes along with its associations. So in this case, if you want to index users and their comments, you should define the index on the user model and index the comments through the association, so that Tire callbacks on the user model reindex the user object whenever any of its attributes change. This only applies to the model the index is defined on.
If you do want to index associations, you need hooks that reindex the account object after save/destroy of the user/comments models. Or you could use the :touch => true option to touch the account model whenever a user or comment changes.
Example: if you want to index users and comments:
include Tire::Model::Search
include Tire::Model::Callbacks

mapping do
  indexes :id, :type => 'integer', :index => :not_analyzed
  indexes :about_me, :type => 'string', :index => :snowball
  indexes :name, :type => 'string', :index => :whitespace
  indexes :comments do
    indexes :content, :type => 'string', :analyzer => 'snowball'
  end
end
So here the index is on the user model and user.comments is an association. Hope this example explains it.
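And if a comment changes on its own, a hedged sketch of the hooks mentioned earlier could look like this (assuming the index lives on the User model as above, so user.tire.update_index is available via Tire::Model::Search):

class Comment < ActiveRecord::Base
  belongs_to :user

  # Reindex the parent user whenever one of its comments is created, updated
  # or destroyed, so the nested comments in the user document stay current.
  after_save    :reindex_user
  after_destroy :reindex_user

  private

  def reindex_user
    user.tire.update_index if user
  end
end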
The answer to the question as posted by Tire owner Karmi is as follows:
Let's say we have an Account class and we deal in articles entities.
In that case, our Account class would have following:
class Account
  # ...

  # Set index name based on account ID
  #
  def articles
    Article.index_name "articles-#{self.id}"
    Article
  end
end
So, whenever we need to access articles for a particular account, either for searching or for indexing, we can simply do:
@account = Account.find( remember_token_or_something_like_that )

# Instead of `Article.search(...)`:
@account.articles.search { query { string 'something interesting' } }

# Instead of `Article.create(...)`:
@account.articles.create id: 'abc123', title: 'Another interesting article!', ...
Having a separate index per user/account works perfectly in certain cases -- but definitely not well in cases where you'd have tens or hundreds of thousands of indices (or more). Having index aliases, with properly set up filters and routing, would perform much better in that case. We would slice the data not based on the tenant identity, but based on time.
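Before moving on to the time-based layout, here's roughly what the "filters and routing" variant could look like against the plain _aliases endpoint (a hedged sketch; the alias name and the account_id field are made up for illustration):

require 'net/http'
require 'json'

# Create a per-tenant alias that filters and routes on account_id, so the
# tenant searches a lightweight alias instead of a dedicated index.
actions = {
  :actions => [
    { :add => {
        :index   => 'articles_2012-07-09',
        :alias   => 'articles_account_42',
        :filter  => { :term => { :account_id => 42 } },
        :routing => '42'
      } }
  ]
}.to_json

uri = URI.parse('http://localhost:9200/_aliases')
Net::HTTP.new(uri.host, uri.port).post(uri.path, actions, 'Content-Type' => 'application/json')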
Let's have a look at a second scenario, starting with a heavily simplified curl http://localhost:9200/_aliases?pretty output:
{
  "articles_2012-07-02" : {
    "aliases" : {
      "articles_plan_pro" : {
      }
    }
  },
  "articles_2012-07-09" : {
    "aliases" : {
      "articles_current" : {
      },
      "articles_shared" : {
      },
      "articles_plan_basic" : {
      },
      "articles_plan_pro" : {
      }
    }
  },
  "articles_2012-07-16" : {
    "aliases" : {
    }
  }
}
You can see that we have three indices, one per week. You can see there are two similar aliases: articles_plan_pro and articles_plan_basic -- obviously, accounts with the “pro” subscription can search two weeks back, but accounts with the “basic” subscription can search only this week.
Notice also that the articles_current alias points to, ehm, the current week (I'm writing this on Thu 2012-07-12). The index for next week is already there, sitting and waiting -- when the time comes, a background job (cron, Resque worker, custom script, ...) will update the aliases. There's a nifty example of aliases in a “sliding window” scenario in the Tire integration test suite.
Let's not look at the articles_shared alias right now; let's look at what tricks we can play with this setup:
class Account
  # ...

  # Set index name based on account subscription
  #
  def articles
    if plan_code = self.subscription && self.subscription.plan_code
      Article.index_name "articles_plan_#{plan_code}"
    else
      Article.index_name "articles_shared"
    end
    return Article
  end
end
Again, we're setting up an index_name for the Article class, which holds our documents. When the current account has a valid subscription, we get the plan_code out of the subscription and direct searches for this account into the relevant index: “basic” or “pro”.
If the account has no subscription -- it's probably a “visitor” type -- we direct the searches to the articles_shared alias. Using the interface is as simple as before, e.g. in ArticlesController:
@account  = Account.find( remember_token_or_something_like_that )
@articles = @account.articles.search { query { ... } }
# ...
We are not using the Article class as a gateway for indexing in this case; we have a separate indexing component, a Sinatra application serving as a light proxy to the elasticsearch Bulk API, providing HTTP authentication and document validation (enforcing rules such as required properties or dates passed as UTC), and using the bare Tire::Index#import and Tire::Index#store APIs.
These APIs talk to the articles_current index alias, which is periodically updated to point to the current week by the aforementioned background process. In this way, we have decoupled all the logic for setting up index names into separate components of the application, so we don't need access to the Article or Account classes in the indexing proxy (it runs on a separate server) or in any other component of the application. Whichever component is indexing indexes against the articles_current alias; whichever component is searching searches against whatever alias or index makes sense for that particular component.
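The weekly alias flip performed by that background job can be a single _aliases call; a hedged sketch, reusing the index names from the alias listing above:

require 'net/http'
require 'json'

# Atomically point articles_current at the new week's index.
actions = {
  :actions => [
    { :remove => { :index => 'articles_2012-07-09', :alias => 'articles_current' } },
    { :add    => { :index => 'articles_2012-07-16', :alias => 'articles_current' } }
  ]
}.to_json

uri = URI.parse('http://localhost:9200/_aliases')
Net::HTTP.new(uri.host, uri.port).post(uri.path, actions, 'Content-Type' => 'application/json')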
I have implemented Solr search for my Rails application. I have indexed the fields for search and it is working perfectly. Now I want to exclude one particular field, title, while searching. How can I skip this particular field while searching? Is there an exclude option for the indexed text fields?
searchable do
  integer :id
  boolean :searchable
  boolean :premium
  string :status
  time :updated_at
  time :created_at

  ###################################################
  # Fulltext search fields
  text :title
  text :summary
  text :skills
end
Here, how can I exclude only the title field from the fulltext search? Something like:
profiles = Profile.search do |s|
  s.fulltext @selected_filters[:query][:value], :exclude => :title
end
Is there any way to do something like this? Please help.
You can specify which fields to include in your search:
Profile.search do
  keywords @selected_filters[:query][:value], :fields => [:summary, :skills], :minimum_match => 1
end
I have a situation where I'm importing feeds from multiple sources, and have to deal with the fact that attributes are not consistent across feeds. So, as an example of some attributes, my site would like to recognize these in a product:
name, description, category, url
And then have Feed 1 map to those, even though its attributes are:
product_name, descript, category_primary, product_url
And Feed 2 has attributes:
Product, Description, CategoryFirst, URLToProduct
So I can import all products into a Mongoid collection using the field names dynamically as they appear in the feed. I could then have a page that allows an admin to map feed attribute names to global attribute names. But what is the easiest way to map the global attribute names to the feed-specific ones? In other words, I'd like to say Feed.find(id).products.name and have it retrieve the value given the feed-specific attribute name (whether that is "product_name" or "Product" or whatever else).
You can have a separate model that allows the admin to configure feeds. This model just stores the incoming feed attribute names that map to your product fields, and converts a feed item into a product.
class FeedToProductConverter
  include Mongoid::Document

  field :converter_name, :type => String
  field :name,           :type => String
  field :description,    :type => String
  field :category,       :type => String
  field :url,            :type => String

  def convert(feed_item)
    p = Product.new
    p.name        = feed_item[self.name]
    p.description = feed_item[self.description]
    p.category    = feed_item[self.category]
    p.url         = feed_item[self.url]
    p
  end

  def convert_and_save(feed_item)
    convert(feed_item).save
  end
end
Now you take inputs from the admin and store them. Whenever the admin wants to import a feed, they select the appropriate converter, and in the backend you use it to convert the feed items into your own products. From then on you can use the resulting products however you like.
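A hedged usage sketch, using the Feed 1 field names from the question (the sample feed item values are made up):

converter = FeedToProductConverter.create!(
  converter_name: "Feed 1",
  name:           "product_name",
  description:    "descript",
  category:       "category_primary",
  url:            "product_url"
)

feed_item = {
  "product_name"     => "Blue Widget",
  "descript"         => "A very blue widget",
  "category_primary" => "Widgets",
  "product_url"      => "http://example.com/widgets/blue"
}

converter.convert_and_save(feed_item)   # builds and persists a Product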