Why is elasticsearch-rails suddenly raising Faraday::ConnectionFailed (execution expired)?

I'm using Elasticsearch in a Rails app via the elasticsearch-model and elasticsearch-rails gems.
Everything was previously working fine, but after some updates I am now getting a Connection Failed error whenever I attempt to interact with the remote cluster (AWS Elasticsearch).
> MyModel.__elasticsearch__.create_index! force: true
=> Faraday::ConnectionFailed (execution expired)
I'm struggling to work out what is causing this connection error. After searching for similar issues, I've adjusted timeouts and tried various combinations of http, https and naked urls, but no success.
What is a sensible way to debug this connection error?
My Elasticsearch is initialized like this.
# initializers/elasticsearch.rb
require 'faraday_middleware'
require 'faraday_middleware/aws_sigv4'

credentials = Aws::Credentials.new(
  ENV.fetch('AWS_ACCESS_KEY_ID'),
  ENV.fetch('AWS_SECRET_ACCESS_KEY')
)

config = {
  url: ENV.fetch('AWS_ELASTICSEARCH_URL'),
  retry_on_failure: true,
  transport_options: {
    request: { timeout: 10 }
  }
}

client = Elasticsearch::Client.new(config) do |f|
  f.request :aws_sigv4, credentials: credentials, service: 'es', region: ENV.fetch('AWS_ELASTICSEARCH_REGION')
end

Elasticsearch::Model.client = client

It turns out that there were two parts to this issue.
First, the Elasticsearch::Client, as configured above, was using the default ES port 9200. My ES is hosted on AWS, which appears to not expose this port.
After fixing this, I ran into the second issue (which I suspect is more specific to this app). I started getting a Faraday::ConnectionFailed (end of file) error. I don't know what caused this, but configuring the client with host and scheme fixed it.
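As an aside, a quick way to confirm which port an AWS endpoint actually answers on is a raw connection check outside of the Elasticsearch client. A minimal sketch, using the same env var as above; on an AWS-hosted domain you would expect port 443 to respond (even if with a 401/403 before request signing) and port 9200 to time out:

# Quick connectivity check, independent of the Elasticsearch gems.
require 'net/http'

host = ENV.fetch('AWS_ELASTICSEARCH_URL') # host only, no scheme

[[443, true], [9200, false]].each do |port, ssl|
  http = Net::HTTP.new(host, port)
  http.use_ssl = ssl
  http.open_timeout = 5
  begin
    response = http.get('/')
    puts "port #{port}: HTTP #{response.code}"
  rescue StandardError => e
    puts "port #{port}: #{e.class} - #{e.message}"
  end
end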
My final config is as follows:
# initializers/elasticsearch.rb
# ...
config = {
  host: ENV.fetch('AWS_ELASTICSEARCH_URL'),
  port: 443,
  scheme: "https",
  retry_on_failure: true,
  transport_options: {
    request: { timeout: 10 }
  }
}

client = Elasticsearch::Client.new(config) do |f|
  # ... (same signing middleware as before)
end
N.B. AWS_ELASTICSEARCH_URL must return a URL without protocol.
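For illustration, a hypothetical value would look like this (host only, no scheme in front, since the scheme is set separately in the config above):

# Hypothetical example of the expected value:
ENV['AWS_ELASTICSEARCH_URL']
# => "search-my-domain-abc123.eu-west-1.es.amazonaws.com"         # works with the host/port/scheme config above
# => "https://search-my-domain-abc123.eu-west-1.es.amazonaws.com" # would not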

This is caused by a version issue.
Use this: gem 'elasticsearch-model', '~> 5'
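In other words, keep the client gems on the same major version as the cluster you are talking to. A hedged Gemfile sketch, assuming an Elasticsearch 5.x cluster:

# Gemfile – pin the client gems to the cluster's major version (5.x assumed here)
gem 'elasticsearch-model', '~> 5'
gem 'elasticsearch-rails', '~> 5'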

Related

ElasticSearch maxClauseCount is set to 1024

I'm using the elasticsearch-rails gem in Rails 5 and I am trying to increase the max_clause_count. I'm using version 7.3 of ElasticSearch.
In my elasticsearch.yml file I have added the following code:
indices.query.bool.max_clause_count: 4096
In my initializer, I load the configs like so:
config = {
  host: "http://localhost:9200",
  transport_options: {
    request: { timeout: 200 }
  }
}

if File.exists?("config/elasticsearch.yml")
  config.merge!(YAML.load_file("config/elasticsearch.yml").symbolize_keys)
end

Elasticsearch::Model.client = Elasticsearch::Client.new(config)
I can still reproduce this error with one of my longer queries. Even after loading the configs above, I get the error:
{"type":"too_many_clauses","reason":"maxClauseCount is set to 1024"}
Printing the config to the terminal gives this output:
{:host=>"http://localhost:9200", :transport_options=>{:request=>{:timeout=>200}}, :"indices.query.bool.max_clause_count"=>4096}
I hate to ask this, but just in case this is the issue: this is a static Lucene setting, so it can only be set in the elasticsearch.yml read by the Elasticsearch server itself and is only picked up at startup; merging it into the Ruby client config, as in the output above, has no effect on the server. Did you restart your cluster after making the config change?
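If it helps, one way to check whether the node actually picked the value up after a restart is to ask Elasticsearch for its node settings. A rough sketch using the client's low-level request method (the exact response shape can vary between versions):

# Ask the cluster which settings each node started with.
client = Elasticsearch::Model.client

response = client.perform_request('GET', '_nodes/settings')
response.body['nodes'].each_value do |node|
  puts node.dig('settings', 'indices', 'query', 'bool', 'max_clause_count')
end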

AWS elasticsearch service throws Faraday::ConnectionFailed: Failed to open TCP connection to https:80 (getaddrinfo: Name or service not known)

I am currently trying to configure AWS's Elasticsearch service in my Rails 5.1.4 application. I am using elasticsearch-rails 6.0.0. I believe the issue is with how my Elasticsearch client is being set up in my initializer. One restriction I have is that I can't use the faraday_middleware-aws-signers-v4 gem to help communication between my AWS Elasticsearch instance and my app; I am attempting to do this with just aws-sdk-rails 1.0.1. Since this server is in the same security group as the Elasticsearch instance, I am assuming I don't need to pass in credentials.
Here is my error:
Faraday::ConnectionFailed: Failed to open TCP connection to https:80 (getaddrinfo: Name or service not known)
from /usr/local/lib/ruby/2.4.0/net/http.rb:906:in `rescue in block in connect'
Here is my initializers/elasticsearch.rb:
config = {
  hosts: { host: 'https://search-epl-elasticsearch-xxxxxxxxxxxxxxxxxx.us-east-2.es.amazonaws.com', port: '80' },
  transport_options: {
    request: { timeout: 5 }
  }
}

Elasticsearch::Model.client = Elasticsearch::Client.new(config)
I realise it's months later, but it appears you're configuring Elasticsearch with an HTTPS URL while telling it to use port 80 instead of 443. Try this instead:
config = {
  url: 'https://search-epl-elasticsearch-xxxxxxxxxxxxxxxxxx.us-east-2.es.amazonaws.com',
  transport_options: {
    request: { timeout: 5 }
  }
}

Elasticsearch::Model.client = Elasticsearch::Client.new(config)
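If you prefer the hosts form from the question, the equivalent (as far as I understand the transport options) is to spell out the host, port and scheme explicitly, with no scheme embedded in the host string:

config = {
  hosts: [
    {
      host:   'search-epl-elasticsearch-xxxxxxxxxxxxxxxxxx.us-east-2.es.amazonaws.com',
      port:   443,
      scheme: 'https'
    }
  ],
  transport_options: {
    request: { timeout: 5 }
  }
}

Elasticsearch::Model.client = Elasticsearch::Client.new(config)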

Redis keeps calling out to localhost:6379 even though deployed to Heroku

I have a Rails app deployed to Heroku and I can't for the life of me figure out why it keeps trying to connect to a local Redis. I do not even have localhost:6379 anywhere in my code, neither in the front end (React Native) nor in the back end, which is my Rails API.
This is the error I get any time I have a new broadcast:
Completed 500 Internal Server Error in 111ms (ActiveRecord: 47.1ms)
Redis::CannotConnectError (Error connecting to Redis on localhost:6379 (Errno::ECONNREFUSED)):
Application.yaml:
gmail_username: "<email_address>"
gmail_password: "<password>"
AWS_ACCESS_KEY: "<access_key>"
AWS_SECRET_KEY: "<secret_key>"
AWS_BUCKET: "<my_s3_app_bucket>"
REDIS_URL: "<redis_url>"
Cable.yaml:
production:
  adapter: redis
  url: <long_url_address>
  host: <host_from_url>
  port: <port_from_url>
  password: <password_from_url>
Production.rb:
config.action_cable.allowed_request_origins = ["https://lynx-v1.herokuapp.com/"]
config.action_cable.url = "wss://lynx-v1.herokuapp.com/cable"
config.web_socket_server_url = "wss://lynx-v1.herokuapp.com/cable"
(I set both the action_cable URL and web_socket_server_url just to test which one worked; no matter which I go with, I still get the error.)
/config/initializers/redis.rb
require "redis"

uri = URI.parse(ENV["REDIS_URL"])

$redis = Redis.new(
  :url => ENV["REDIS_URL"]
)
I don't know what is going on. Is it some kind of default that makes Redis look for localhost:6379? I followed the steps one by one and I keep getting this error.
It turned out that an add-on gem in the Rails app was not fully configured. After finishing the setup for that gem, everything started working again.
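For reference, redis-rb does fall back to localhost:6379 whenever it is initialised without a URL, which is why any gem that builds its own Redis connection also has to be pointed at REDIS_URL. A minimal sketch of that default behaviour, assuming only the redis gem:

require "redis"

# With no arguments redis-rb uses its built-in default, i.e. redis://localhost:6379.
default_client = Redis.new
# default_client.ping would raise Redis::CannotConnectError on Heroku,
# because nothing is listening on localhost there.

# Point it at the Heroku add-on instead:
heroku_client = Redis.new(url: ENV.fetch("REDIS_URL"))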

ElasticSearch 5.0 + Rails

I installed a fresh Elasticsearch 5.0 today and changed my Rails configuration to point to ES 5.
My elasticsearch.rb configuration file looks like:
require "faraday"
require "typhoeus/adapters/faraday"
config = {
host: "http://xxx.xxx.xxx.yyyy:9200/",
transport_options: {
request: { timeout: 5 }
},
}
if File.exists?("config/elasticsearch.yml")
config.merge!(YAML.load_file("config/elasticsearch.yml").symbolize_keys)
end
I have the following related gems installed on the application:
gem 'elasticsearch-model'
gem 'elasticsearch-rails'
gem 'elasticsearch-persistence', require: 'elasticsearch/persistence/model'
When I go to start my application, I receive the message:
[400] No handler found for uri [//****] and method [DELETE] (Elasticsearch::Transport::Transport::Errors::BadRequest)
Has anyone encountered this issue before?
I looked around for a bit and it looks like ElasticSearch 5.0 has a new API for deleting, but I'm not sure if this is the root cause of my issues:
https://www.elastic.co/guide/en/elasticsearch/reference/5.0/docs-delete-by-query.html
Thanks in advance!
According to this discussion, the problem is the proxy_options configuration. Just leave out the transport_options. If you change the configuration as follows, it should work:
config = {
  hosts: default_host,
  adapter: :typhoeus
}
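For completeness, a sketch of what the whole initializer might look like with the Typhoeus adapter; default_host above is a placeholder, and the host string below is the one from the question:

# config/initializers/elasticsearch.rb (sketch)
require "faraday"
require "typhoeus"
require "typhoeus/adapters/faraday"

config = {
  hosts: "http://xxx.xxx.xxx.yyyy:9200/",
  adapter: :typhoeus
}

Elasticsearch::Model.client = Elasticsearch::Client.new(config)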

where to specify cluster details when using elastic search gem in Ruby

I want to access the data in an Elasticsearch cluster from my Rails application. Let's say the server is running at http://localhost:9200 and I want to access the endpoint http://localhost:9200/location/type.
I was following this documentation and came across this example:
require 'elasticsearch'
client = Elasticsearch::Client.new log: true
client.cluster.health
client.index index: 'my-index', type: 'my-document', id: 1, body: { title: 'Test' }
client.indices.refresh index: 'my-index'
client.search index: 'my-index', body: { query: { match: { title: 'test' } } }
Questions:
Where do I define the details of my Elasticsearch cluster in this code? The cluster is running at http://localhost:9200/.
As the documentation specifies, the elasticsearch gem wraps elasticsearch-transport for connecting to a cluster and elasticsearch-api for accessing the Elasticsearch API. From the documentation of elasticsearch-transport:
In the simplest form, connect to Elasticsearch running on http://localhost:9200 without any configuration:
So basically, client = Elasticsearch::Client.new log: true will by default connect to the cluster running at localhost:9200 (the same machine as your Rails app).
Go ahead and try executing client.cluster.health after establishing the connection and you'll get to know if it succeeded or not.
Moreover, if your cluster runs on a different server, you can use the following to connect to it:
es = Elasticsearch::Client.new host: 'http(s)://<path-to-cluster-server>'
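For example, pointing the client at a remote cluster and checking its health might look like this (the host is a placeholder):

require 'elasticsearch'

# Placeholder host – substitute your cluster's address.
es = Elasticsearch::Client.new host: 'http://es.example.com:9200', log: true

# A 'green' or 'yellow' status means the connection works.
puts es.cluster.health['status']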
