ElasticSearch maxClauseCount is set to 1024 - ruby-on-rails

I'm using the elasticsearch-rails gem in Rails 5 and am trying to increase max_clause_count. I'm running Elasticsearch 7.3.
In my elasticsearch.yml file I have added the following code:
indices.query.bool.max_clause_count: 4096
In my initializer, I load the configs like so:
config = {
  host: "http://localhost:9200",
  transport_options: {
    request: { timeout: 200 }
  }
}
if File.exists?("config/elasticsearch.yml")
  config.merge!(YAML.load_file("config/elasticsearch.yml").symbolize_keys)
end
Elasticsearch::Model.client = Elasticsearch::Client.new(config)
Even after loading the configs above, I can still reproduce the error with one of my longer queries:
{"type":"too_many_clauses","reason":"maxClauseCount is set to 1024"}
Printing the config to the terminal, I get this output:
{:host=>"http://localhost:9200", :transport_options=>{:request=>{:timeout=>200}}, :"indices.query.bool.max_clause_count"=>4096}

I hate to ask this, but just in case it is the issue: this is a static Lucene setting, so it can only be set in the Elasticsearch server's config file and is only picked up at startup; merging it into the Ruby client config has no effect. Did you restart your cluster after making the config change?
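If it helps to verify: after restarting with the setting in the server's elasticsearch.yml, you can read it back through the nodes settings API. A minimal sketch using the client from the initializer above (the value will print as blank/nil if the node never picked it up):

response = Elasticsearch::Model.client.perform_request('GET', '_nodes/settings')
response.body['nodes'].each_value do |node|
  # Each node reports the settings it was started with; a missing key means
  # the node did not load the value from its elasticsearch.yml.
  puts node.dig('settings', 'indices', 'query', 'bool', 'max_clause_count')
end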

Related

Can we change the connection to the Elasticsearch server dynamically in a Rails app at run time

I have 2 Elasticsearch servers. In my Rails app, can I change the connection to the Elasticsearch servers at run time?
For example,
- If user 1 logs in to the app, it should connect to Elasticsearch server 1
- If user 2 logs in to the app, it should connect to Elasticsearch server 2
Thanks
You can use randomize_hosts when creating the connection:
args = {
  hosts: "https://host1.local:9091,https://host2.local:9091",
  adapter: :httpclient,
  logger: (Rails.env.development? || Rails.env.test?) ? Rails.logger : nil,
  reload_on_failure: false,
  randomize_hosts: true,
  request_timeout: 5
}
client = Elasticsearch::Client.new(args)
Randomize hosts doc
There you can also read about host selection strategies other than round robin; you can implement your own, as sketched below.
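A minimal sketch of a custom selector, assuming elasticsearch-transport's selector_class option (the class name and strategy here are made up for illustration):

require 'elasticsearch'

# Always prefer the first (primary) host; the connection collection only
# yields connections it considers alive, so this falls back to the other
# host when the first one is marked dead.
class PreferFirstHostSelector
  include Elasticsearch::Transport::Transport::Connections::Selector::Base

  def select(options = {})
    connections.first
  end
end

client = Elasticsearch::Client.new(
  hosts: "https://host1.local:9091,https://host2.local:9091",
  selector_class: PreferFirstHostSelector
)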

Why is elasticsearch-rails suddenly raising Faraday::ConnectionFailed (execution expired)?

I'm using Elasticsearch in a Rails app via the elasticsearch-model and elasticsearch-rails gems.
Everything was previously working fine, but after some updates I am now getting a Connection Failed error whenever I attempt to interact with the remote cluster (AWS Elasticsearch).
> MyModel.__elasticsearch__.create_index! force: true
=> Faraday::ConnectionFailed (execution expired)
I'm struggling to work out what is causing this connection error. After searching for similar issues, I've adjusted timeouts and tried various combinations of http, https and naked urls, but no success.
What is a sensible way to debug this connection error?
My Elasticsearch is initialized like this.
#initializers/elasticsearch.rb
require 'faraday_middleware'
require 'faraday_middleware/aws_sigv4'

credentials = Aws::Credentials.new(
  ENV.fetch('AWS_ACCESS_KEY_ID'),
  ENV.fetch('AWS_SECRET_ACCESS_KEY')
)

config = {
  url: ENV.fetch('AWS_ELASTICSEARCH_URL'),
  retry_on_failure: true,
  transport_options: {
    request: { timeout: 10 }
  }
}

client = Elasticsearch::Client.new(config) do |f|
  f.request :aws_sigv4, credentials: credentials, service: 'es', region: ENV.fetch('AWS_ELASTICSEARCH_REGION')
end

Elasticsearch::Model.client = client
It turns out that there were two parts to this issue.
First, the Elasticsearch::Client, as configured above, was using the default ES port 9200. My ES is hosted on AWS, which does not appear to expose this port.
After fixing this, I ran into the second issue (which I suspect is more specific to this app). I started getting a Faraday::ConnectionFailed (end of file) error. I don't know what caused this, but configuring the client with host and scheme fixed it.
My final config is as follows:
#initializers/elasticsearch.rb
# ...
config = {
  host: ENV.fetch('AWS_ELASTICSEARCH_URL'),
  port: 443,
  scheme: "https",
  retry_on_failure: true,
  transport_options: {
    request: { timeout: 10 }
  }
}
client = Elasticsearch::Client.new(config) do |f|
  # ...
N.B. AWS_ELASTICSEARCH_URL must return a URL without protocol.
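For illustration, the environment variables would then look something like this (hypothetical values):

AWS_ELASTICSEARCH_URL=search-my-domain-abc123.eu-west-1.es.amazonaws.com   # note: no https:// prefix
AWS_ELASTICSEARCH_REGION=eu-west-1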
This is because of a version issue. Use gem 'elasticsearch-model', '~> 5'

How to resolve influxdb.conf parse config error

While running influxdb:1.0 in Docker I get the following error:
[run] 2019/01/13 09:04:14 InfluxDB starting, version 1.0.2, branch master, commit ff307047057b7797418998a4ed709b0c0f346324
[run] 2019/01/13 09:04:14 Go version go1.6.2, GOMAXPROCS set to 1
[run] 2019/01/13 09:04:14 Using configuration at: /etc/influxdb/influxdb.conf
run: parse config: Near line 1 (last key parsed 'Merging'): Expected key separator '=', but got 'w' instead.
These are the first 3 lines of my .conf file:
Merging with configuration at: /etc/influxdb/influxdb.conf
reporting-disabled = false
bind-address = ":8088"
Does anybody know how to resolve this?
Thanks
According to the configuration file that you posted, you have the following issue on the first line:
Merging with configuration at: /etc/influxdb/influxdb.conf
This line is being treated as a normal config entry, so the parser expects it to be in key = value form, something like:
Merging = with configuration at: /etc/influxdb/influxdb.conf
What you actually need to do is comment out that first line to avoid the issue; it looks like a startup log message that ended up in the config file.
A second note on your config file (not related to this issue specifically, but you may hit other errors): the third line needs to be under [admin], so the final config should look like this:
# Merging with configuration at: /etc/influxdb/influxdb.conf
reporting-disabled = false
[admin]
bind-address = ":8088"
It would be better to start from the original config file and modify that, so you follow the expected format without introducing new issues.
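For example, with the official Docker image you can dump the stock configuration and edit that instead (a sketch, assuming the influxdb:1.0 image you are already running):

docker run --rm influxdb:1.0 influxd config > influxdb.conf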

ElasticSearch 5.0 + Rails

Installed a fresh ElasticSearch 5.0 today and changed my Rails configuration to point to ES 5.
My elasticsearch.rb configuration file looks like:
require "faraday"
require "typhoeus/adapters/faraday"
config = {
host: "http://xxx.xxx.xxx.yyyy:9200/",
transport_options: {
request: { timeout: 5 }
},
}
if File.exists?("config/elasticsearch.yml")
config.merge!(YAML.load_file("config/elasticsearch.yml").symbolize_keys)
end
I have the following related gems installed on the application:
gem 'elasticsearch-model'
gem 'elasticsearch-rails'
gem 'elasticsearch-persistence', require: 'elasticsearch/persistence/model'
When I go to start my application, I receive the message:
[400] No handler found for uri [//****] and method [DELETE] (Elasticsearch::Transport::Transport::Errors::BadRequest)
Has anyone encountered this issue before?
I looked around for a bit and it looks like ElasticSearch 5.0 has a new API for deleting, but I'm not sure if this is the root cause of my issues:
https://www.elastic.co/guide/en/elasticsearch/reference/5.0/docs-delete-by-query.html
Thanks in advance!
According to this discussion, the problem is the proxy_options configuration. Just drop the transport_options. If you change the configuration as follows, it should work:
config = {
  hosts: default_host,
  adapter: :typhoeus
}

Thinking Sphinx min_infix_len and delta not working on production server

We have an issue with Thinking Sphinx min_infix_len and delta indexes on our production servers.
I have everything working in development mode on OS X, but when we deploy via Capistrano to our Ubuntu server running Passenger/Apache, both delta indexing and min_infix_len seem to stop working.
We're deploying as the ubuntu user, which also runs Apache. We had an issue yesterday with the production folder not being created, but we created it manually and I can see a list of the delta files in there now.
I've followed the docs through.
I can see the delta flag set to true on record creation, but searching doesn't find the record. Once we rebuild the index (as the ubuntu user) I can find the record, but only with the full string.
My sphinx.yml file is as follows:
production:
  enable_star: 1
  min_infix_len: 3
  bin_path: "/usr/local/bin"
  version: 2.0.5
  mem_limit: 128M
  searchd_log_file: "/var/log/searchd.log"

development:
  min_infix_len: 3
  bin_path: "/usr/local/bin"
  version: 2.0.5
  mem_limit: 128M
Rebuild, start, and conf all work fine, and my production.conf file contains this:
index company_core
{
  source = company_core_0
  path = /var/www/html/ordering-main/releases/20110831095808/db/sphinx/production/company_core
  charset_type = utf-8
  min_infix_len = 1
  enable_star = 1
}
I also have this in my production.rb env file:
ThinkingSphinx.deltas_enabled = true
ThinkingSphinx.updates_enabled = true
My searchd.log file only contains this:
[Wed Aug 31 09:40:04.437 2011] [ 5485] accepting connections
Nothing at all appears in the Apache error/access logs.
-- EDIT ---
define_index do
  indexes :name
  has created_at, updated_at
  set_property :delta => true
end
Not sure if it's the cause, but the version values in your sphinx.yml are for the version of Sphinx, not Thinking Sphinx - so you may want to run indexer to double-check what that value should be (likely one of 0.9.9, 1.10-beta or 2.0.1-beta).
Also: on the server, in script/console production, can you share the full output of the following? (I'm not interested in the value returned, hence forcing it to an empty string; it would just get in the way otherwise.)
Company.define_indexes && Company.index_delta; ''
If delta indexing is not working on the production server for the Passenger user, you have to give the Passenger user write permission so it can create the index files and write them to the db/sphinx/production folder.
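For example (a sketch, using the index path from the production.conf above and assuming Passenger runs as the ubuntu user):

sudo chown -R ubuntu:ubuntu /var/www/html/ordering-main/releases/20110831095808/db/sphinx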
Or you can set these two lines in your nginx/conf/nginx.conf:
passenger_user_switching off;
passenger_default_user root;
Example:
passenger_root /usr/local/lib/ruby/gems/1.9.1/gems/passenger-3.0.0;
passenger_ruby /usr/local/bin/ruby;
passenger_user_switching off;
passenger_default_user root;
