Where to specify cluster details when using the elasticsearch gem in Ruby - ruby-on-rails

I want to access the data in an Elasticsearch cluster from my Rails application. Let's say the server is running at http://localhost:9200 and I want to access the endpoint http://localhost:9200/location/type.
I was following the documentation and came across this example:
require 'elasticsearch'
client = Elasticsearch::Client.new log: true
client.cluster.health
client.index index: 'my-index', type: 'my-document', id: 1, body: { title: 'Test' }
client.indices.refresh index: 'my-index'
client.search index: 'my-index', body: { query: { match: { title: 'test' } } }
Questions:
Where do I define the details of my Elasticsearch cluster in the code? The cluster is running at http://localhost:9200/.

As the documentation specifies, the elasticsearch gem wraps elasticsearch-transport for connecting to a cluster and elasticsearch-api for accessing the Elasticsearch API. From the documentation of elasticsearch-transport:
In the simplest form, connect to Elasticsearch running on http://localhost:9200 without any configuration:
So basically, client = Elasticsearch::Client.new log: true will by default connect to the cluster running at localhost:9200 (the same machine as your Rails app).
Go ahead and execute client.cluster.health after establishing the connection, and you'll know whether it succeeded.
Moreover, if your cluster runs on a different server, you can use the following to connect to it:
es = Elasticsearch::Client.new host: 'http(s)://<path-to-cluster-server>'
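For example, a minimal sketch of connecting to a remote cluster (the host name es.example.com is hypothetical):
require 'elasticsearch'

# Connect to a remote cluster instead of the default localhost:9200
client = Elasticsearch::Client.new(
  url: 'http://es.example.com:9200',  # hypothetical host
  log: true
)

# Quick sanity check that the connection works
client.cluster.health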

Related

Deploying smart contract using truffle on private blockchain node on docker

I am facing problems deploying a smart contract on my private blockchain network. I created my blockchain network on three VMs (miners) using puppeth on a fourth VM (controller) by following the steps in this blog: https://medium.com/@collin.cusce/using-puppeth-to-manually-create-an-ethereum-proof-of-authority-clique-network-on-aws-ae0d7c906cce
Afterwards, I installed Truffle on one of the miner VMs and initialized it using the command:
truffle init
Then I wrote a simple hello-world smart contract, compiled it, and deployed it on the Truffle development blockchain, and it worked. However, when I tried to deploy it on my private blockchain, I couldn't connect to the network.
The admin.nodeInfo command in the geth console returns the following output:
docker exec -it 954cd3955065 geth attach ipc:/root/.ethereum/geth.ipc
Welcome to the Geth JavaScript console!
instance: Geth/v1.9.25-unstable-ead81461-20201123/linux-amd64/go1.15.5
coinbase: 0xe8cc4bea2cfdfd14cddefe1141bedd109576b9a9
at block: 78558 (Tue Dec 01 2020 22:01:02 GMT+0000 (UTC))
datadir: /root/.ethereum
modules: admin:1.0 clique:1.0 debug:1.0 eth:1.0 miner:1.0 net:1.0 personal:1.0 rpc:1.0 txpool:1.0 web3:1.0
To exit, press ctrl-d
> admin.nodeInfo
{
  enode: "enode://7206ca3c62f6db47e1230dcf14a765d4c9b4870a66470dbb21fcc5ed2fab2167d6bcc47eec8044c42037b3e6e0017aeb8ddfc3580471da54a6c7274a0c1fe46b@10.100.2.32:30303",
  enr: "enr:-Je4QGXlVAESp8r2s1uHBJxoDLWQo8IvZsbe5sX2YRBb0un9Gdlt8nfDKQBR_j0lDPtaoCCuis4cJJlqtEHfa4tLO2EIg2V0aMfGhG5b-B6AgmlkgnY0gmlwhApkAiCJc2VjcDI1NmsxoQNyBso8YvbbR-EjDc8Up2XUybSHCmZHDbsh_MXtL6shZ4N0Y3CCdl-DdWRwgnZf",
  id: "027a351994ac1b127df56180b6210310cc0164f17f1b12c167cb167c4ffaa122",
  ip: "10.100.2.32",
  listenAddr: "[::]:30303",
  name: "Geth/v1.9.25-unstable-ead81461-20201123/linux-amd64/go1.15.5",
  ports: {
    discovery: 30303,
    listener: 30303
  },
  protocols: {
    eth: {
      config: {
        byzantiumBlock: 0,
        chainId: 1515,
        clique: {...},
        constantinopleBlock: 0,
        eip150Block: 0,
        eip150Hash: "0x0000000000000000000000000000000000000000000000000000000000000000",
        eip155Block: 0,
        eip158Block: 0,
        homesteadBlock: 0,
        istanbulBlock: 0,
        petersburgBlock: 0
      },
      difficulty: 98201,
      genesis: "0x17f752387c901db617cf0594ecd2cb9811dfcd666318c2e0e7cb0239471da979",
      head: "0xf8a37d0390558746901faa55463c127c553f02cf2d23ce0cb469fcd470c810f9",
      network: 1515
    }
  }
}
I tried adding the network configuration in truffle-config.js like this:
devnet2: {
  host: "localhost",
  port: "30303", // port where the node is listening
  network_id: "*",
  from: "0x91cd7b879fefff34259d577a56d290b3315bf9b3" // address to deploy from
}
Then, when deploying using the command truffle deploy --network devnet2, I always get this error:
Compiling your contracts...
===========================
> Everything is up to date, there is nothing to compile.
/usr/local/lib/node_modules/truffle/build/webpack:/packages/provider/index.js:56
throw new Error(errorMessage);
^
Error: There was a timeout while attempting to connect to the network.
Check to see that your provider is valid.
If you have a slow internet connection, try configuring a longer timeout in your Truffle config. Use the networks[networkName].networkCheckTimeout property to do this.
at Timeout.setTimeout (/usr/local/lib/node_modules/truffle/build/webpack:/packages/provider/index.js:56:1)
at ontimeout (timers.js:436:11)
at tryOnTimeout (timers.js:300:5)
at listOnTimeout (timers.js:263:5)
at Timer.processTimers (timers.js:223:10)
I tried extending the timeout limit but it didn't work. I also tried using Web3 providers (HTTPProvider and IPCProvider) but without any luck (I can give more details if needed).
Any help is appreciated because I spent a lot of time on it without getting anywhere. Unfortunately, I couldn't find anything on deploying smart contracts to a node that is running on Docker.
I managed to run smart contracts on a private network, though not using Docker. Some things come to mind. Did you run a miner on your network? You need a miner running so that the contract gets migrated. Did you make sure that the gas limit is met when running the contract? The miners will wait for the max gas limit to be reached before processing any request.
Did you already deploy the contract? In that case, either create a new migration script by bumping the version or use the reset flag to run all migration scripts again, as shown below.
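For example, re-running all migrations against the custom network from the question's config might look like this:
truffle migrate --reset --network devnet2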

Rails CQL cannot connect to AWS Keyspaces (AWS Cassandra)

I am trying to connect from a Ruby on Rails application to AWS Keyspaces (AWS Cassandra), but I cannot manage to do it. I use the cequel gem and generated config/cequel.yml, which contains something similar to the following:
development:
  host: "CONTACT_POINT"
  username: "USER"
  password: "PASS"
  port: 9142
  keyspace: key_development
  max_retries: 3
  retry_delay: 0.5
  newrelic: true
  ssl: true
  server_cert: 'config/certs/AmazonRootCA1.pem'
  replication:
    class: NetworkTopologyStrategy
    datacenter1: 3
    datacenter2: 2
  durable_writes: false
(The credentials were used in another app, where they work as expected.)
When I try to run:
rake cequel:keyspace:create
I get the following errors:
Cassandra::Errors::NoHostsAvailable: All attempted hosts failed: x.xxx.xxx.xxx (Cassandra::Errors::ServerError: Internal Server Error)
Set the datacenter to us-east-1 and drop the replication definition.
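A minimal sketch of the adjusted config/cequel.yml under those assumptions (the host shown is AWS's documented us-east-1 Keyspaces endpoint; whether cequel forwards a datacenter option to the underlying driver should be verified; Keyspaces manages replication itself, so the replication block is removed):
development:
  host: "cassandra.us-east-1.amazonaws.com"
  datacenter: us-east-1
  username: "USER"
  password: "PASS"
  port: 9142
  keyspace: key_development
  ssl: true
  server_cert: 'config/certs/AmazonRootCA1.pem'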

Can we change the connection to an Elasticsearch server dynamically in a Rails app at run time?

I have 2 Elasticsearch servers. In my Rails app, can I change the connection to the Elasticsearch servers at run time?
For example:
- If user 1 logs in to the app, it should connect to Elasticsearch server 1
- If user 2 logs in to the app, it should connect to Elasticsearch server 2
Thanks
You can use randomize_hosts when creating the connection:
args = {
  hosts: "https://host1.local:9091,https://host2.local:9091",
  adapter: :httpclient,
  logger: (Rails.env.development? || Rails.env.test?) ? Rails.logger : nil,
  reload_on_failure: false,
  randomize_hosts: true,
  request_timeout: 5
}
client = Elasticsearch::Client.new(args)
Randomize hosts doc
Here you can read about host selection strategies other than round-robin; you could also implement your own.
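If the goal is strictly per-user routing rather than load balancing, another option is to build and cache one client per host. A minimal sketch, assuming each user record exposes a hypothetical es_host attribute holding its cluster URL:
# Returns a memoized client for the user's assigned cluster
def es_client_for(user)
  @es_clients ||= {}
  @es_clients[user.es_host] ||= Elasticsearch::Client.new(url: user.es_host)
end

client = es_client_for(current_user)
client.search index: 'my-index', body: { query: { match_all: {} } }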

Why is elasticsearch-rails suddenly raising Faraday::ConnectionFailed (execution expired)?

I'm using Elasticsearch in a Rails app via the elasticsearch-model and elasticsearch-rails gems.
Everything was previously working fine, but after some updates I am now getting a Connection Failed error whenever I attempt to interact with the remote cluster (AWS Elasticsearch).
> MyModel.__elasticsearch__.create_index! force: true
=> Faraday::ConnectionFailed (execution expired)
I'm struggling to work out what is causing this connection error. After searching for similar issues, I've adjusted timeouts and tried various combinations of http, https and naked urls, but no success.
What is a sensible way to debug this connection error?
My Elasticsearch client is initialized like this:
# initializers/elasticsearch.rb
require 'faraday_middleware'
require 'faraday_middleware/aws_sigv4'

credentials = Aws::Credentials.new(
  ENV.fetch('AWS_ACCESS_KEY_ID'),
  ENV.fetch('AWS_SECRET_ACCESS_KEY')
)

config = {
  url: ENV.fetch('AWS_ELASTICSEARCH_URL'),
  retry_on_failure: true,
  transport_options: {
    request: { timeout: 10 }
  }
}

client = Elasticsearch::Client.new(config) do |f|
  f.request :aws_sigv4, credentials: credentials, service: 'es', region: ENV.fetch('AWS_ELASTICSEARCH_REGION')
end
Elasticsearch::Model.client = client
It turns out that there were two parts to this issue.
First, the Elasticsearch::Client, as configured above, was using the default ES port 9200. My ES is hosted on AWS, which does not appear to expose this port.
After fixing this, I ran into the second issue (which I suspect is more specific to this app). I started getting a Faraday::ConnectionFailed (end of file) error. I don't know what caused this, but configuring the client with host and scheme fixed it.
My final config is as follows:
# initializers/elasticsearch.rb
# ...

config = {
  host: ENV.fetch('AWS_ELASTICSEARCH_URL'),
  port: 443,
  scheme: "https",
  retry_on_failure: true,
  transport_options: {
    request: { timeout: 10 }
  }
}

client = Elasticsearch::Client.new(config) do |f|
  # ...
end
N.B. AWS_ELASTICSEARCH_URL must contain the host without the protocol.
This is because of a version issue. Use gem 'elasticsearch-model', '~> 5' in your Gemfile.

AWS elasticsearch service throws Faraday::ConnectionFailed: Failed to open TCP connection to https:80 (getaddrinfo: Name or service not known)

I am currently trying to configure AWS's Elasticsearch service in my Rails v5.1.4 application. I am using elasticsearch-rails 6.0.0. I believe the issue I am currently getting is with how my Elasticsearch client is being set up in my initializer. One restriction I have is that I can't use the faraday_middleware-aws-signers-v4 gem to help communication between my AWS Elasticsearch instance and my app. I am attempting to do this with just aws-sdk-rails 1.0.1. Since this server is in the same security group as the Elasticsearch instance, I am assuming I don't need to pass in credentials.
Here is my error:
Faraday::ConnectionFailed: Failed to open TCP connection to https:80 (getaddrinfo: Name or service not known)
from /usr/local/lib/ruby/2.4.0/net/http.rb:906:in `rescue in block in connect'
Here is my initializers/elasticsearch.rb:
config = {
  hosts: { host: 'https://search-epl-elasticsearch-xxxxxxxxxxxxxxxxxx.us-east-2.es.amazonaws.com', port: '80' },
  transport_options: {
    request: { timeout: 5 }
  }
}
Elasticsearch::Model.client = Elasticsearch::Client.new(config)
I realise it's months later, but it appears you're configuring Elasticsearch with an https URL and telling it to use port 80 instead of 443. Try this instead:
config = {
  url: 'https://search-epl-elasticsearch-xxxxxxxxxxxxxxxxxx.us-east-2.es.amazonaws.com',
  transport_options: {
    request: { timeout: 5 }
  }
}
Elasticsearch::Model.client = Elasticsearch::Client.new(config)
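A quick way to verify the fix (cluster.health is part of the standard client API; any response other than a connection error means the client reached the cluster):
Elasticsearch::Model.client.cluster.health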
