Rails 2.2.2 - ferret server keeps dying - ruby-on-rails

We've been using ferret in a rails 2.2.2 app for years, and it's been fine (with a few quirks).
Now, though, whenever I try to use it locally, it just dies. I can start it up, and then as soon as Rails (in my local server, or in the console) tries to do a ferret search, the ferret server disappears.
The only feedback I can get is from log/acts_as_ferret.log, which has this:
Will use remote index server which should be available at druby://localhost:9010
[user] DRb connection error: connection closed
[digilearning_support_file] DRb connection error: druby://localhost:9010 - #<Errno::ECONNREFUSED: Connection refused - connect(2)>
[asset] DRb connection error: druby://localhost:9010 - #<Errno::ECONNREFUSED: Connection refused - connect(2)>
[cmw_step_resource] DRb connection error: druby://localhost:9010 - #<Errno::ECONNREFUSED: Connection refused - connect(2)>
/home/max/work/charanga/elearn_container/elearn/vendor/plugins/acts_as_ferret/lib/acts_as_ferret.rb:354 ###index_for [DigilearningTitle(id: integer, name: string, description: text, created_at: datetime, updated_at: datetime, folder_name: string, user_id: integer, summary: text, disk_space_in_megabytes: float, top_level_live_content: boolean, live_content: boolean)]
/home/max/work/charanga/elearn_container/elearn/vendor/plugins/acts_as_ferret/lib/acts_as_ferret.rb:338 ###options: {:offset=>nil, :limit=>nil}
ar_options: {}
[digilearning_title] DRb connection error: druby://localhost:9010 - #<Errno::ECONNREFUSED: Connection refused - connect(2)>
[digilearning_title] /home/max/work/charanga/elearn_container/elearn/vendor/plugins/acts_as_ferret/lib/ferret_find_methods.rb:42 ### now retrieving records from AR: id_arrays = {}, options: {}
[digilearning_title] /home/max/work/charanga/elearn_container/elearn/vendor/plugins/acts_as_ferret/lib/ferret_find_methods.rb:43 ### total_ids in id_arrays = 0 (uniq version of this is 0
/home/max/work/charanga/elearn_container/elearn/vendor/plugins/acts_as_ferret/lib/acts_as_ferret.rb:440 ### find_options = {}
[digilearning_title] /home/max/work/charanga/elearn_container/elearn/vendor/plugins/acts_as_ferret/lib/ferret_find_methods.rb:45 ###0 results from AR
[digilearning_title] /home/max/work/charanga/elearn_container/elearn/vendor/plugins/acts_as_ferret/lib/ferret_find_methods.rb:69 ### total_hits = 0
/home/max/work/charanga/elearn_container/elearn/vendor/plugins/acts_as_ferret/lib/acts_as_ferret.rb:341 ###Query: "Music"
total hits: 0, results delivered: 0
After I start it, I can see with ps that it's running, but as soon as I start up the Rails console, ferret dies. I start ferret with script/ferret_server start, which in turn just requires a file:
#!/usr/bin/env ruby
begin
  require File.join(File.dirname(__FILE__), '../vendor/plugins/acts_as_ferret/lib/server_manager')
rescue LoadError
  puts "Got a LoadError"
  # try the gem
  require 'rubygems'
  gem 'acts_as_ferret'
  require 'server_manager'
end
It doesn't output anything, so it isn't hitting an exception while requiring that file.
I can't understand what has suddenly made it fail like this. I have been changing what is indexed for one of the models, but reverting those changes doesn't make the problem go away.
I don't know how to debug this; can anyone help? Thanks.
EDIT: I tried deleting all the contents of my index/development folder, thinking there might be something weird in there, but it didn't help.
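EDIT 2: To rule out basic networking, I've been checking whether anything is actually still listening on the DRb port after the crash. This is just a throwaway helper I wrote (drb_port_open? is not part of acts_as_ferret):

```ruby
require 'socket'
require 'uri'

# Throwaway helper (not part of acts_as_ferret): returns true if something
# is accepting TCP connections at the given druby:// URI.
def drb_port_open?(druby_uri)
  u = URI.parse(druby_uri.sub('druby://', 'tcp://'))
  TCPSocket.new(u.host, u.port).close
  true
rescue Errno::ECONNREFUSED, Errno::EHOSTUNREACH, SocketError
  false
end

# acts_as_ferret's default remote index address, per the log above.
drb_port_open?('druby://localhost:9010')
```

Running this right after starting the server and again after the first search makes it obvious exactly when the process disappears.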

Related

Errno::ENOTTY Inappropriate ioctl for device when connecting to a remote server through Net::SSH on SuSe (with Ruby on Rails 5.2.4)

My Ruby on Rails application remotely starts some scripts on a distant SuSe server (SUSE Linux Enterprise Server 15 SP2). It relies on the net-ssh gem which is declared in the Gemfile: gem 'net-ssh'.
The script is triggered remotely through the following block:
Net::SSH.start(remote_host, remote_user, password: remote_password) do |ssh|
  feed_back = ssh.exec!("#{event.statement}")
end
This works as expected as long as the Rails server runs on Windows Server 2016, which is my DEV environment. But when I deploy to the Validation environment, which is SUSE Linux Enterprise Server 15 SP2, I get this error message:
Errno::ENOTTY in myController#myMethod
Inappropriate ioctl for device
On the other hand, issuing the SSH request through the command line (from SUSE to SUSE) works as expected too. Reading around, I did not find a relevant Net::SSH parameter to solve this.
Your suggestions are welcome!
I finally found out that the message refers to the operating mode of SSH: it requires a sort of terminal emulation, a so-called pty, wrapped in an SSH channel.
So I implemented it this way:
Net::SSH.start(remote_host, remote_user, password: remote_password) do |session|
  session.open_channel do |channel|
    channel.request_pty do |ch, success|
      raise "Error requesting pty" unless success
      puts "------------ pty successfully obtained"
    end
    channel.exec "#{@task.statement}" do |ch, success|
      abort "could not execute command" unless success
      channel.on_data do |ch, data|
        puts "------------ got stdout: #{data}"
        @task.update_attribute(:return_value, data)
      end
      channel.on_extended_data do |ch, type, data|
        puts "------------ got stderr: #{data}"
      end
      channel.on_close do |ch|
        puts "------------ channel is closing!"
      end
    end
  end
  # Wait until the session closes
  session.loop
end
This solved my issue.
Note:
The answer proposed above was only part of the solution. The same error occurred again with this source code when deploying to the production server.
The issue turned out to be the password for the SSH target: I retyped it by hand instead of doing the usual copy/paste from MS Excel, and the SSH connection is now successful!
As the error raised is not a simple "connection refused", I suspect the password string had an unexpected character encoding or a trailing invisible character.
As the first proposed solution provides a working example, I leave it there.
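For what it's worth, that kind of copy/paste debris can be spotted before handing the credential to Net::SSH. This is just an illustrative helper, not a Net::SSH API:

```ruby
# Illustrative helper (not part of Net::SSH): report a string's encoding and
# flag any characters outside printable ASCII, e.g. a pasted non-breaking space.
def inspect_credential(str)
  {
    encoding: str.encoding.name,
    suspect: str.chars.any? { |c| c.ord < 32 || c.ord > 126 }
  }
end

inspect_credential("s3cret")         # clean ASCII password
inspect_credential("s3cret\u00A0")   # trailing non-breaking space, typical spreadsheet debris
```

A flagged character does not always mean the password is wrong (legitimate passwords can contain non-ASCII), but it points straight at the copy/paste suspicion above.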

Sidekiq worker displaying 'ERROR: heartbeat: "\xE2" from ASCII-8BIT to UTF-8' every two seconds

I am a junior dev working on a Ruby on Rails App that uses React on the front end. Recently I started getting this error and have no idea what is causing it. Has anyone seen this before?
This is what the terminal gives back:
2020-08-14T16:14:36.743Z 37931 TID-oxmbyiczg INFO: See LICENSE and the LGPL-3.0 for licensing details.
2020-08-14T16:14:36.743Z 37931 TID-oxmbyiczg INFO: Upgrade to Sidekiq Pro for more features and support: http://sidekiq.org
2020-08-14T16:14:36.743Z 37931 TID-oxmbyiczg INFO: Booting Sidekiq 5.0.5 with redis options {:id=>"Sidekiq-server-PID-37931", :url=>nil}
2020-08-14T16:14:36.748Z 37931 TID-oxmbyiczg INFO: Starting processing, hit Ctrl-C to stop
2020-08-14T16:14:36.757Z 37931 TID-oxmddbzao ERROR: heartbeat: "\xE2" from ASCII-8BIT to UTF-8
2020-08-14T16:14:41.773Z 37931 TID-oxmddbzao ERROR: heartbeat: "\xE2" from ASCII-8BIT to UTF-8
2020-08-14T16:14:46.776Z 37931 TID-oxmddbzao ERROR: heartbeat: "\xE2" from ASCII-8BIT to UTF-8
2020-08-14T16:14:51.783Z 37931 TID-oxmddbzao ERROR: heartbeat: "\xE2" from ASCII-8BIT to UTF-8
A little background: process sets iterate over the workers, which in turn send a heartbeat to Redis every five seconds to check uptime and connectivity with the Redis instance.
First, I would check Encoding.default_external on the server and client machines and ensure it is Encoding::UTF_8 on both. Secondly, ensure config.encoding = "utf-8" is set in the application.rb of the app running the worker.
But the above is all very general and does not explain the error you are seeing, that comes from this block:
def ❤(key, data)
  begin
    # logic
  rescue => e
    # ignore all redis/network issues
    logger.error("heartbeat: #{e.message}")
  end
end
The above is called from heartbeat, which is called from start_heartbeat, which is called in run. The key used for the heartbeat comes from the identity method in util.rb (in the Sidekiq repo), which builds on hostname, a method that calls Socket.gethostname. As its name suggests, that returns the hostname of the server, in the platform-specific encoding.
So, if it comes to that: find the machine running the worker, spin up a console, and check the encoding of the value returned by Socket.gethostname; that should explain the error.
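To see how this failure mode arises, the conversion the heartbeat key goes through can be reproduced directly. A sketch (the byte \xE2 here is just the one from the log, not necessarily what your hostname contains):

```ruby
require 'socket'

# Check the live hostname, as suggested above.
hostname = Socket.gethostname
puts "#{hostname.inspect} (#{hostname.encoding})"

# Reproduce the exact failure from the log: a binary string containing the
# byte \xE2 cannot be converted from ASCII-8BIT to UTF-8.
bad = "myhost\xE2".b   # String#b forces ASCII-8BIT (binary) encoding
begin
  bad.encode(Encoding::UTF_8)
rescue Encoding::UndefinedConversionError => e
  puts e.message   # => "\xE2" from ASCII-8BIT to UTF-8
end
```

If your hostname prints as ASCII-8BIT with bytes above 127, renaming the machine to plain ASCII (or fixing the locale) is the likely way out.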

PG::UnableToSend: no connection to the server in Rails

I have a production server running Ubuntu 14.04, Rails 4.2.0, and PostgreSQL 9.6.1 with the pg gem 0.21.0/0.20.0. In the last few days, there have constantly been errors accessing the customer_input_datax_records table in the PostgreSQL server.
D, [2017-07-20T18:08:39.166897 #1244] DEBUG -- : CustomerInputDatax::Record Load (0.1ms) SELECT "customer_input_datax_records".* FROM "customer_input_datax_records" WHERE ("customer_input_datax_records"."status" != $1) [["status", "email_sent"]]
E, [2017-07-20T18:08:39.166990 #1244] ERROR -- : PG::UnableToSend: no connection to the server
: SELECT "customer_input_datax_records".* FROM "customer_input_datax_records" WHERE ("customer_input_datax_records"."status" != $1)
The code that accesses the DB server runs in a Rufus scheduler 3.4.2 loop:
s = Rufus::Scheduler.singleton
s.every '2m' do
  new_signups = CustomerInputDatax::Record.where.not(:status => 'email_sent').all
  .......
end
After restarting the server, the first request (or first few) usually works. But after some time (e.g. 1 or 2 hours), the issue starts to show up, even though the app itself seems to run fine (reading and writing records and creating new ones). There are some online posts about this error, but the problem seems not to be the one I am having. Before I reinstall the PostgreSQL server, I would like to get some ideas about what causes the "no connection".
UPDATE: database.yml
production:
  adapter: postgresql
  encoding: unicode
  host: localhost
  database: wb_production
  pool: 5
  username: postgres
  password: xxxxxxx
So, the error is "RAILS: PG::UnableToSend: no connection to the server".
That reminds me of Connection pool issue with ActiveRecord objects in rufus-scheduler
You could do
s = Rufus::Scheduler.singleton

s.every '2m' do
  ActiveRecord::Base.connection_pool.with_connection do
    new_signups = CustomerInputDatax::Record
      .where.not(status: 'email_sent')
      .all
    # ...
  end
end
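The reason with_connection helps: each scheduler run checks a connection out of the pool and, crucially, returns it even when the block raises, so long-running schedulers don't slowly leak connections. A toy illustration of that pattern in pure Ruby (TinyPool is made up for this sketch, not the ActiveRecord API):

```ruby
# Toy illustration (not ActiveRecord) of the with_connection pattern:
# check a connection out of a pool, yield it, and always return it.
class TinyPool
  def initialize(size)
    @q = Queue.new
    size.times { |i| @q << "conn-#{i}" }
  end

  def with_connection
    conn = @q.pop          # blocks if the pool is exhausted
    yield conn
  ensure
    @q << conn if conn     # returned even if the block raises
  end

  def available
    @q.size
  end
end

pool = TinyPool.new(5)
pool.with_connection { |c| c }   # connection is checked out, then returned
pool.available                   # => 5
```

Without the ensure (or without with_connection in the scheduler), a raising job would leave the connection checked out forever, which looks exactly like connections "going away" after an hour or two.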
digging
It would be great to know more about the problem.
I'd suggest trying this code:
s = Rufus::Scheduler.singleton

def s.on_error(job, error)
  Rails.logger.error(
    "err#{error.object_id} rufus-scheduler intercepted #{error.inspect}" +
    " in job #{job.inspect}")
  error.backtrace.each_with_index do |line, i|
    Rails.logger.error(
      "err#{error.object_id} #{i}: #{line}")
  end
end

s.every '2m' do
  new_signups = CustomerInputDatax::Record.where.not(:status => 'email_sent').all
  # .......
end
As soon as the problem manifests itself, I'd look for the on_error full output in the Rails log.
This on_error comes from https://github.com/jmettraux/rufus-scheduler#rufusscheduleron_errorjob-error
As we discussed in the comments, the problem seems related to your rufus version.
I would suggest checking out the whenever gem and invoking a rake task instead of calling the ActiveRecord model directly.
It could be a good idea, however, to open an issue with the traceback of your error in the rufus-scheduler repo on GitHub (just to let them know...)

Faraday::SSLError for Elasticsearch

Currently running into an issue where my background workers, which communicate with Elasticsearch via elasticsearch-client, are hitting SSL errors inside Faraday.
The error is this:
SSL_connect returned=1 errno=0 state=SSLv3 read server hello A: sslv3 alert handshake failure
The configuration works fine some of the time (around ~50%), and it has never failed for me inside a console session.
The trace of the command is this:
curl -X GET 'https://<host>/_alias/models_write?pretty'
The client config is this:
Thread.current[:chewy_client] ||= begin
  client_configuration[:reload_on_failure] = true
  client_configuration[:reload_connections] = 30
  client_configuration[:sniffer_timeout] = 0.5
  client_configuration[:transport_options] ||= {}
  client_configuration[:transport_options][:ssl] = { :version => :TLSv1_2 }
  client_configuration[:transport_options][:headers] = { content_type: 'application/json' }
  client_configuration[:trace] = true
  client_configuration[:logger] = Rails.logger

  ::Elasticsearch::Client.new(client_configuration) do |f|
    f.request :aws_signers_v4,
              credentials: AWS::Core::CredentialProviders::DefaultProvider.new,
              service_name: 'es',
              region: ENV['ES_REGION'] || 'us-west-2'
  end
end
As you can see, I explicitly set the SSL version to TLSv1_2, but I'm still getting an SSLv3 error.
I thought maybe it was a race condition, so I ran a script spawning about 10 processes with 50 threads each, calling the Sidekiq perform method inside, and still couldn't reproduce it.
I am using managed AWS Elasticsearch 2.3, if that is at all relevant.
Any help or guidance in the right direction would be greatly appreciated, I would be happy to attach as much info as needed.
Figured it out. The problem was that the elasticsearch-ruby gem auto-detects and loads an HTTP adapter if one is not specified. The adapter used in my console was not the one being auto-loaded in Sidekiq.
The Sidekiq job was using the HTTPClient adapter, which did not respect the SSL version option; hence the error. After explicitly defining the Faraday adapter, it worked.
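The underlying fix amounts to making sure the TLS version is actually pinned at the HTTP layer. With stdlib Net::HTTP (shown here only to illustrate the idea, not the elasticsearch-ruby API) the equivalent pinning looks like this:

```ruby
require 'net/http'
require 'openssl'

# Pin the TLS version explicitly so no SSLv3/TLS fallback can occur.
http = Net::HTTP.new('example.com', 443)
http.use_ssl = true
http.min_version = OpenSSL::SSL::TLS1_2_VERSION
http.max_version = OpenSSL::SSL::TLS1_2_VERSION
```

In the elasticsearch-ruby client, the corresponding move is what the answer describes: pass an explicit adapter in the Faraday block (for example f.adapter :net_http) so every process, console and Sidekiq alike, uses one that honors the :ssl options.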

Why am I getting a connection refused error when using the Deadweight gem in rails?

My rake file for deadweight looks like
task :deadweight do
  dw = Deadweight.new
  dw.stylesheets = ["/stylesheets/application.css"]
  dw.pages = ["/"]
  puts dw.run
end
dw.run gives the following error:
Connection refused - connect(2)
I'm not that familiar with Deadweight, but it could be that it expects a running server at 0.0.0.0:3000.
From https://github.com/aanand/deadweight :
Running rake deadweight will output all unused rules, one per line.
Note that it looks at http://localhost:3000 by default, so you'll need
to have script/server (or whatever your server command looks like)
running.
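One quick way to confirm that this is the problem is to hit the default URL before running the task. rails_server_up? here is just a throwaway check, not part of Deadweight:

```ruby
require 'net/http'
require 'uri'

# Throwaway preflight: true if an HTTP server answers at the given URL.
def rails_server_up?(url = 'http://localhost:3000/')
  Net::HTTP.get_response(URI(url))
  true
rescue Errno::ECONNREFUSED, SocketError
  false
end

rails_server_up?   # false until script/server (or your server command) is running
```

If this returns false, start the app server first and re-run rake deadweight.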
