Elasticsearch different behaviour on test server - ruby-on-rails

Elasticsearch is currently giving different results in different environments even though I'm running the same search.
It works fine in development on my localhost; however, it doesn't work on my test server (it doesn't return the expected records, and yes, the database is seeded).
As far as I understand it, this should check whether any of the three clauses matches, and if one does, return all the hits.
Locally I'm running Windows 10, just using rails s.
The server is running Ubuntu 16, using nginx and unicorn.
Here's my mapping (note: I'm not completely sure whether the analyzer does anything, but it shouldn't matter):
settings index: { number_of_shards: 1 } do
  mappings dynamic: 'true' do
    indexes :reportdate, type: 'date'
    indexes :client do
      indexes :id
      indexes :name, analyzer: 'dutch'
    end
    indexes :animal do
      indexes :id
      indexes :species, analyzer: 'dutch'
      indexes :other_species, analyzer: 'dutch'
      indexes :chip_code
    end
    indexes :locations do
      indexes :id
      indexes :street, analyzer: 'dutch'
      indexes :city, analyzer: 'dutch'
      indexes :postalcode
    end
  end
end
Here's my search:
__elasticsearch__.search({
  sort: [
    { reportdate: { order: "desc" }},
    "_score"
  ],
  query: {
    bool: {
      should: [
        { multi_match: {
          query: query,
          type: "phrase_prefix",
          fields: [ "other_species", "name" ]
        }},
        { prefix: {
          chip_code: query
        }},
        { match_phrase: {
          "_all": {
            query: query,
            fuzziness: "AUTO"
          }
        }}
      ]
    }
  }
})
EDIT #1: Note: I'm fairly new to Ruby on Rails; I started about two weeks ago, doing maintenance work on an old project, and they also requested a search function.

Turns out the problem was that I was using foreign tables (well, kind of) and a nested mapping (probably the latter).
Here's the updated code that works on both production and locally:
__elasticsearch__.search({
  sort: [
    { reportdate: { order: "desc" }},
    "_score"
  ],
  query: {
    bool: {
      should: [
        { multi_match: {
          query: query,
          type: "phrase_prefix",
          fields: [ "animal.other_species", "client.name" ]
        }},
        { prefix: {
          "animal.chip_code": query
        }},
        { match_phrase: {
          "_all": {
            query: query,
            fuzziness: "AUTO"
          }
        }}
      ]
    }
  }
})
I'm not sure why the animal and client parents don't need to be prepended for it to work locally, while they are required on my testing server. Written this way, however, it works on both.
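Side note for anyone hitting the same thing: multi_match also accepts wildcard patterns in field names, so the parent objects don't have to be hard-coded. A sketch of that variant, assuming your Elasticsearch version supports field-name wildcards:
{ multi_match: {
  query: query,
  type: "phrase_prefix",
  # "*.other_species" and "*.name" expand to animal.other_species and client.name
  fields: [ "*.other_species", "*.name" ]
}}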

Related

Elasticsearch ngram index returns nothing

I'm attempting to build a custom analyzer using nGram, and it's apparently working OK; I just can't query it for some reason.
I'm using `elasticsearch-model` in Ruby.
Here is how the index is defined:
include Elasticsearch::Model

index_name "stemmed_videos"

settings index: { number_of_shards: 5 },
  analysis: {
    analyzer: {
      video_analyzer: {
        tokenizer: :stemmer,
        filter: ["lowercase"]
      },
      standard_lowercase: {
        tokenizer: :standard,
        filter: ["lowercase"]
      }
    },
    tokenizer: {
      stemmer: {
        type: "nGram",
        min_gram: 2,
        max_gram: 10,
        token_chars: ["letter", "digit", "symbol"]
      }
    }
  } do
  mappings do
    indexes :title, type: 'string', analyzer: 'video_analyzer'
    indexes :description, type: 'string', analyzer: 'standard_lowercase'
  end
end

def as_indexed_json(options = {})
  as_json(only: [:title, :description])
end
I've attempted to take one of the strings I'm trying to index and run it through "http://localhost:9200/stemmed_videos/_analyze?pretty=1&analyzer=video_analyzer&text=indiana_jones_4-tlr3_h640w.mov" and it's apparently doing the right thing.
But then the only way I can get a generic query to match is by adding wildcards, which is not what I expected.
[8] pry(main)> Video.__elasticsearch__.search('*ind*').results.total
=> 4
[9] pry(main)> Video.__elasticsearch__.search('ind').results.total
=> 0
(4 is the right number of results in my test data).
What I'd love to accomplish is to get the right results without the wildcards, because with what I have now I'd need to take the query string and add the wildcards in code, which honestly is rather bad.
How can I accomplish this?
Thanks in advance.
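One direction that might remove the need for wildcards (a sketch, not a verified fix): search('ind') with a bare string builds a query_string query against the _all field, which is analyzed with the standard analyzer rather than with video_analyzer, so the stored ngram tokens are never consulted. A match query scoped to the ngram-analyzed title field should hit them directly:
Video.__elasticsearch__.search(
  query: {
    match: {
      # "ind" is run through video_analyzer as well, so its ngrams
      # can match the tokens stored at index time
      title: "ind"
    }
  }
).results.total
A separate, non-ngram search_analyzer on title would keep the query side from over-generating tokens.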

Elasticsearch date decay function, Rails

I'm doing a simple query with multiple fields and trying to apply a decay function based on how many days old a given document is. The following query is my attempt:
{
  query: {
    function_score: {
      query: {
        multi_match: {
          query: query,
          fields: ['name', 'location']
        },
        functions: [{
          gauss: {
            created_at: {
              origin: 'now',
              scale: '1d',
              offset: '2d',
              decay: 0.5
            }
          }
        }]
      }
    }
  }
}
With the following mapping:
mappings dynamic: 'false' do
  indexes :name, analyzer: 'english'
  indexes :location, analyzer: 'english'
  indexes :created_at, type: 'date'
end
Gives the following error:
[400] {"error":{"root_cause":[{"type":"query_parsing_exception","reason":"No query registered for [gauss]","index":"people","line":1,"col":143}],"type":"search_phase_execution_exception","reason":"all shards failed","phase":"query_fetch","grouped":true,"failed_shards":[{"shard":0,"index":"jobs","node":"abcdefgZq1PMsd882foA","reason":{"type":"query_parsing_exception","reason":"No query registered for [gauss]","index":"people","line":1,"col":143}}]},"status":400}
The functions array needs to go one level higher, directly inside function_score and not inside the query, like this:
{
  query: {
    function_score: {
      functions: [{
        gauss: {
          created_at: {
            origin: 'now',
            scale: '1d',
            offset: '2d',
            decay: 0.5
          }
        }
      }],
      query: {
        multi_match: {
          query: query,
          fields: ['name', 'location']
        }
      }
    }
  }
}

Find model with part of title using ElasticSearch / Rails

I have the following Post model:
class Post < ActiveRecord::Base
  include Elasticsearch::Model
  include Elasticsearch::Model::Callbacks

  def self.search(query)
    __elasticsearch__.search(
      {
        query: {
          multi_match: {
            query: query,
            fields: ['title']
          }
        },
        filter: {
          and: [
            { term: { deleted: false } },
            { term: { enabled: true } }
          ]
        }
      }
    )
  end

  settings index: { number_of_shards: 1 } do
    mappings dynamic: 'false' do
      indexes :title, analyzer: 'english'
    end
  end
end

Post.import
I have one Post with the title 'Amsterdam'. When I execute Post.search('Amsterdam') I get one record; all is good. But if I execute Post.search('Amster') I get no records. What am I doing wrong? How can I fix it? Thanks!
OS: OS X; I installed Elasticsearch using Homebrew.
You will have to use an nGram tokenizer in order to create a partial text search. A very good example of how to do this can be found here. That said, I would be very careful with nGram, as it can often turn up unrelated results.
This is because the substring "mon" is contained within all of the strings "monkey", "money", and "monday", all of which are unrelated.
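For reference, a minimal sketch of such a setup with elasticsearch-model; the filter and analyzer names (partial_filter, partial_analyzer) are made up for illustration:
settings index: { number_of_shards: 1 },
  analysis: {
    filter: {
      partial_filter: { type: "nGram", min_gram: 3, max_gram: 10 }
    },
    analyzer: {
      partial_analyzer: {
        type: "custom",
        tokenizer: "standard",
        filter: ["lowercase", "partial_filter"]
      }
    }
  } do
  mappings dynamic: 'false' do
    # ngrams at index time only; search input is analyzed with plain standard
    indexes :title, analyzer: 'partial_analyzer', search_analyzer: 'standard'
  end
end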
Alternatively (what I would do):
You could try making it a fuzzy search. However, the maximum edit distance with fuzzy search is only two, which still doesn't cover your example ('Amster' is three edits away from 'Amsterdam'), though it does tend to return relevant results.
The example I found: How to use Fuzzy Search
# Perform a fuzzy search!
POST /fuzzy_products/product/_search
{
  "query": {
    "match": {
      "name": {
        "query": "Vacuummm",
        "fuzziness": 2,
        "prefix_length": 1
      }
    }
  }
}

Why does this elasticsearch/tire code not match partial words?

I'm trying to use Elasticsearch and Tire to index some data. I want to be able to search it on partial matches, not just full words. When running a query on the example model below, it will only match full words in the "notes" field. I can't figure out why.
class Thingy
  include Tire::Model::Search
  include Tire::Model::Callbacks

  # has some attributes

  tire do
    settings analysis: {
      filter: {
        ngram_filter: {
          type: 'nGram',
          min_gram: 2,
          max_gram: 12
        }
      },
      analyzer: {
        index_ngram_analyzer: {
          type: 'custom',
          tokenizer: 'standard',
          filter: ['lowercase']
        },
        search_ngram_analyzer: {
          type: 'custom',
          tokenizer: 'standard',
          filter: ['lowercase', 'ngram_filter']
        }
      }
    } do
      mapping do
        indexes :notes, type: "string", boost: 10, index_analyzer: "index_ngram_analyzer", search_analyzer: "search_ngram_analyzer"
      end
    end
  end

  def to_indexed_json
    {
      id: self.id,
      account_id: self.account_id,
      created_at: self.created_at,
      test: self.test,
      notes: some_method_that_returns_string
    }.to_json
  end
end
The query looks like this:
@things = Thing.search page: params[:page], per_page: 50 do
  query {
    boolean {
      must { string "account_id:#{account_id}" }
      must_not { string "test:true" }
      must { string "#{query}" }
    }
  }
  sort {
    by :id, 'desc'
  }
  size 50
  highlight notes: { number_of_fragments: 0 }, options: { tag: '<span class="match">' }
end
I've also tried this but it never returns results (and ideally I'd like the search to apply to all fields, not just notes):
must { match :notes, "#{query}" } # tried with `type: :phrase` as well
What am I doing wrong?
You almost got there! :) The problem is that you've swapped the role of index_analyzer and search_analyzer, in fact.
Let me explain briefly how it works:
You want to break document words into these ngram "chunks" during indexing, so when you are indexing a word like Martian, it gets broken into: ['ma', 'mar', 'mart', ..., 'ar', 'art', 'arti', ...]. You can try it with the Analyze API: http://localhost:9200/thingies/_analyze?text=Martian&analyzer=index_ngram_analyzer.
When people are searching, they are already using these partial ngrams, so to speak, since they search for "mar" or "mart" etc. So you don't break their phrases further with the ngram tokenizer.
That's why you (correctly) separate index_analyzer and search_analyzer in your mapping, so Elasticsearch knows how to analyze the notes attribute during indexing, and how to analyze any search phrase against this attribute.
In other words, do this:
analyzer: {
  index_ngram_analyzer: {
    type: 'custom',
    tokenizer: 'standard',
    filter: ['lowercase', 'ngram_filter']
  },
  search_ngram_analyzer: {
    type: 'custom',
    tokenizer: 'standard',
    filter: ['lowercase']
  }
}
Full, working Ruby code is below. Also, I highly recommend migrating to the new elasticsearch-model Rubygem, which contains all the important features of Tire and is actively developed.
require 'tire'

Tire.index('thingies').delete

class Thingy
  include Tire::Model::Persistence

  tire do
    settings analysis: {
      filter: {
        ngram_filter: {
          type: 'nGram',
          min_gram: 2,
          max_gram: 12
        }
      },
      analyzer: {
        index_ngram_analyzer: {
          type: 'custom',
          tokenizer: 'standard',
          filter: ['lowercase', 'ngram_filter']
        },
        search_ngram_analyzer: {
          type: 'custom',
          tokenizer: 'standard',
          filter: ['lowercase']
        }
      }
    } do
      mapping do
        indexes :notes, type: "string", index_analyzer: "index_ngram_analyzer", search_analyzer: "search_ngram_analyzer"
      end
    end
  end

  property :notes
end

Thingy.create id: 1, notes: 'Martial Partial Martian'
Thingy.create id: 2, notes: 'Venetian Completion Heresion'

Thingy.index.refresh

# Find 'art' in 'martial'
#
# Equivalent to: http://localhost:9200/thingies/_search?q=notes:art
#
results = Thingy.search do
  query do
    match :notes, 'art'
  end
end

p results.map(&:notes)

# Find 'net' in 'venetian'
#
# Equivalent to: http://localhost:9200/thingies/_search?q=notes:net
#
results = Thingy.search do
  query do
    match :notes, 'net'
  end
end

p results.map(&:notes)
The problem for me was that I was using the string query instead of the match query. The search should have been written like this:
@things = Thing.search page: params[:page], per_page: 50 do
  query {
    match [:prop_1, :prop_2, :notes], query
  }
  sort {
    by :id, 'desc'
  }
  filter :term, account_id: account_id
  filter :term, test: false
  size 50
  highlight notes: { number_of_fragments: 0 }, options: { tag: '<span class="match">' }
end

ElasticSearch/Tire: How to properly set partial word searches up

Even though I've seen many accounts mentioning this as relatively straightforward, I haven't managed to get it working properly. Let's say I have this:
class Car < ActiveRecord::Base
  settings analysis: {
    filter: {
      ngram_filter: { type: "nGram", min_gram: 3, max_gram: 12 }
    },
    analyzer: {
      partial_analyzer: {
        type: "snowball",
        tokenizer: "standard",
        filter: ["standard", "lowercase", "ngram_filter"]
      }
    }
  } do
    mapping do
      indexes :name, index_analyzer: "partial_analyzer"
    end
  end
end
And let's say I have a car named "Ford" and I update my index. Now, if I search for "Ford":
Car.tire.search { query { string "Ford" } }
My car is in my results. Now, if I look for "For":
Car.tire.search { query { string "For" } }
My car isn't found anymore. I thought the nGram filter would automatically take care of this for me, but apparently it doesn't. As a temporary solution I'm using the wildcard (*) for such searches, but this is definitely not the best approach, since the min_gram and max_gram definitions are key elements of my search. Can anyone tell me how they solved this?
I'm using Rails 3.2.12 with Ruby 1.9.3. The Elasticsearch version is 0.20.5.
You want to use the custom analyzer instead of the snowball one: Elasticsearch custom analyzer
Basically the other analyzers come with a predefined set of filters and tokenizers.
You probably also want to use the Edge-Ngram filter: Edge-Ngram filter
The difference between Edge-NGram and NGram is that Edge-NGram only sticks to the "edges" of a term, starting at either the front or the back: Ford -> [For] instead of [For, ord].
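A hand-worked illustration of that difference (assuming min_gram: 3, max_gram: 4 and the front side; not actual API output):
# nGram("Ford")     => ["For", "Ford", "ord"]  # every substring window
# edgeNGram("Ford") => ["For", "Ford"]         # anchored to the front of the term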
Some more advanced links on the topic of autocompletion:
Autocompletion with fuzziness (pure elasticsearch, no tire, but very good read)
Another useful question with links provided
Edit
Basically I have a very similar setup to what you have, but with another analyzer for the title and a multi-field for both. Because of multi-language support there is an array of names instead of just a name.
I also specify the search_analyzer and I use string-keys instead of symbols. This is what I actually have:
settings "analysis" => {
"filter" => {
"name_ngrams" => {
"side" => "front",
"max_gram" => 20,
"min_gram" => 2,
"type" => "edgeNGram"
}
},
"analyzer" => {
"full_name" => {
"filter" => %w(standard lowercase asciifolding),
"type" => "custom",
"tokenizer" => "letter"
},
"partial_name" => {
"filter" => %w(standard lowercase asciifolding name_ngrams),
"type" => "custom",
"tokenizer" => "standard"
}
}
} do
mapping do
indexes :names do
mapping do
indexes :name, :type => 'multi_field',
:fields => {
"partial" => {
"search_analyzer" => "full_name",
"index_analyzer" => "partial_name",
"type" => "string"
},
"title" => {
"type" => "string",
"analyzer" => "full_name"
}
}
end
end
end
end
