Errno::ECONNRESET: Connection reset by peer in Rails using rest-client

We have a Ruby on Rails application with a "search" feature (searching for a company). From the browser, the user keys in a name and hits search; the search makes a REST API call to an outside system and returns some search results.
We are using the "rest-client" gem (for Ruby on Rails).
This works for a few hours, then the search suddenly breaks, and in my log I see:
Errno::ECONNRESET: Connection reset by peer
We tried to investigate this issue by looking into the logs, but we don't see anything useful there.
To make the search work again, we need to restart Passenger, and then it works immediately. This happens only in the production environment; I tested in staging and it seems to work well.
Questions:
What could be causing this "reset issue"?
Why does it start working again after Passenger is restarted in production?
Since we use rest-client, should we write code to manually close the connection when this exception happens?
Could a firewall issue be causing this?
Is there any code I can place in the exception handler to re-establish the connection so that the next call succeeds?
Code:
def call
  resp_data = RestClient.get(@request_url, @header)
rescue => error
  puts "Exception: #{error.message}"
end

Try the following:
begin
  resp_data = RestClient::Request.execute(
    method: :get,
    url: @request_url, #=> "https://api.example.com/auth2/endpoint"
    headers: {
      Authorization: @header #=> "Bearer access_token"
    }
  )
rescue => error
  puts "Exception: #{error.message}"
end
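If the problem is a stale connection, another option is to rescue the reset and retry once, which forces rest-client to open a fresh socket on the next attempt. A minimal sketch, assuming @request_url and @header are set as in the question:

# Sketch: retry once on a reset connection so the follow-up request
# opens a fresh socket instead of reusing the dead one.
def call
  retries ||= 0
  RestClient.get(@request_url, @header)
rescue Errno::ECONNRESET, RestClient::ServerBrokeConnection => error
  retries += 1
  retry if retries <= 1
  puts "Exception: #{error.message}"
  raise
end

This doesn't explain why the connection goes stale (a firewall or load balancer silently dropping idle connections is a common culprit), but it keeps a single reset from breaking every subsequent search.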

It's very weird. I ran into the same problem.
My script is shown below. (It worked great on my local machine, and worked great on the remote server, until one day the disk filled up and the script died with: Errno::ECONNRESET: Connection reset by peer.)
ENV['RAILS_ENV'] = ARGV.first || ENV['RAILS_ENV'] || 'production'
require File.expand_path(File.dirname(__FILE__) + "/../config/environment")
require 'rails'
require 'rubygems'
require 'rest-client'
url = 'https://api-cn.faceplusplus.com/cardpp/v1/ocridcard'
id_card = File.expand_path(File.dirname(__FILE__) + "/id_card.jpg")
puts "== id_card: #{id_card}"
response = RestClient.post url, {
  api_key: 'Td_XijYBZUMp',
  api_secret: 'ihehEuWYwOWM',
  image_file: File.open(id_card, 'rb')
}
puts "== response:"
puts response.inspect
My environment: ruby 2.5.0, ubuntu server 16, with an 8-core CPU and 50G of RAM.
This problem first appeared when my hard disk hit 100% usage, with no space left.
However, once I freed enough disk space, the problem persisted.
After I restarted the server, the problem persisted.
When I run this script under Rails, the problem persists.
However, when I run the script standalone, it works fine.
So, finally, I turned to the curl command. It works great!
The working curl command looks like:
$ curl -X POST "https://api-cn.faceplusplus.com/cardpp/v1/ocridcard" \
  -F "api_key=Td_XijYBCOYh-Rf_kCMj" \
  -F "api_secret=iheWYoQcbCPM9n2VS" \
  -F "image_file=@scripts/id_card.jpg"

I'm late on this, but in my case the problem was that I was using AWS ElastiCache for Redis, and in there I had a cluster with a primary endpoint and a read-only endpoint.
This error message can show up if the application is having problems connecting to Redis!
I was using the read-only endpoint instead of the primary one, and the primary is the one used to write data to Redis.
Taking a close look at my endpoint, it was something like application-name.qwerty-ro.cache.amazonaws.com:6379, and I changed it to application-name.qwerty.cache.amazonaws.com:6379, without the -ro part, which is what made it read-only.
I lost about 6 hours trying to figure it out, so hope it helps someone else!
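For anyone hitting the same thing, the fix lives entirely in the URL you hand the client; with the redis-rb gem that might look like the following sketch (the endpoint name is illustrative, per the answer above):

# config/initializers/redis.rb -- endpoint name is illustrative
require 'redis'

# Use the primary (writable) endpoint; the -ro variant is read-only,
# and write attempts against it can surface as Errno::ECONNRESET.
REDIS = Redis.new(url: 'redis://application-name.qwerty.cache.amazonaws.com:6379')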

Related

Rails not logging requests

I'm utterly stumped as to why I'm not able to see the Rails controller outputs in my development log. I've spent days beating my head against a wall trying to figure this out and I'm not sure what else to try.
Setup: Rails 5.2.3 app running ruby 2.6.3 via docker-compose.
It started with me not being able to see my app logs when running docker logs <container-name>. However, since I could still see the output from puma starting and from a shell script that ran rake tasks, I realized the issue might be with Rails itself.
To help find the issue, I:
Tore down and rebuilt the docker environment, several times
Stopped writing via STDOUT in favor of logs/development.log
Disabled lograge and elastic-apm, just in case
Reverted my development.rb config back to what's generated with a rails new
Followed the suggestions here
However, when running the rails console via docker exec -it <container-name>:
Running Rails.logger.level returns 2, which is warn, despite the default development log level being debug
I'm able to see log output when running Rails.logger.warn 'foo'
After setting Rails.logger.level = 0, I'm able to see output when running Rails.logger.debug 'foo'
I tried setting the value explicitly with config.log_level = :debug in development.rb, yet it still set itself to the warn level.
However, I'm still not able to see any logs when navigating the application. Any thoughts?
Ugh. I feel like the biggest schmuck, but I've figured out the issue.
I went back through source control to see what had changed recently. In addition to the elastic-apm gem, I had also added the Unleash gem.
I went to check its configuration, and it looks like following their recommended configuration breaks logging. The line specifically causing offense was in the Unleash initializer: config.logger = Rails.logger
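If you still want the gem's output, one workaround (a sketch, not from the original post; the app name, URL, and log path are placeholders) is to give Unleash a logger of its own rather than sharing Rails.logger:

# config/initializers/unleash.rb -- sketch with placeholder values
Unleash.configure do |config|
  config.app_name = 'my-rails-app'     # placeholder
  config.url      = ENV['UNLEASH_URL'] # placeholder
  # A dedicated logger keeps the gem from interfering with Rails.logger:
  config.logger   = Logger.new(Rails.root.join('log/unleash.log'))
end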

Unable to publish message with Faye Ruby client

First off, I've already looked at:
faye ruby client is not working
I tried the recommendations and it's still not working for me.
Here's my code:
def broadcast(channel, data = nil, &block)
  return if Rails.env.test?
  if data.nil? && block_given?
    data = capture(&block)
  end
  client = Faye::Client.new(APP_CONFIG['faye_url'])
  client.publish(channel, data)
end
I tried using Net::HTTP.post_form, and the server froze with no errors, warnings, or anything. I've tried putting it into an EM.run block with no luck. I can publish to Faye with curl just fine, and the message is sent on to subscribers, but for some reason the Ruby client just isn't working.
I'm using faye-rails, ruby 1.9.3, and rails 2.3.13.
The server is behind nginx; I tried both the nginx ip/port and the thin ip/port. Still didn't work.
It works fine in development, just not in production.
Update:
I disabled both WebSockets and EventSource to force it to use long polling, so it would work through nginx without any errors.
It is also running as rack middleware so it shouldn't need any additional ports.
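Not a full answer, but since the curl publish works, one fallback is to make the same HTTP POST from Ruby instead of going through Faye::Client. A sketch, assuming the Faye endpoint accepts JSON-encoded Bayeux messages (the channel and payload are illustrative):

# Sketch: publish to Faye the same way curl does, via a plain HTTP POST.
require 'net/http'
require 'json'
require 'uri'

uri = URI(APP_CONFIG['faye_url']) # e.g. http://localhost:9292/faye
message = { channel: '/some/channel', data: { text: 'hello' } } # illustrative

http = Net::HTTP.new(uri.host, uri.port)
request = Net::HTTP::Post.new(uri.path, 'Content-Type' => 'application/json')
request.body = message.to_json
puts http.request(request).body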

start thinking sphinx on rails server startup

I have an nginx + Passenger stack for my Rails app.
Right now, after each server restart, I have to open a terminal in the project folder and run:
rake ts:start
How can I automate this, so that after each server restart Thinking Sphinx starts without my typing the command in a terminal?
I use rails 3.2.8 and ubuntu 12.04.
I can't think of anything else to try. How can I do this? Any advice would help.
What I did to solve the same problem:
In config/application.rb, add:
module Rails
  def self.rake?
    !!@rake
  end

  def self.rake=(value)
    @rake = !!value
  end
end
In Rakefile, add this line:
Rails.rake = true
Finally, in config/initializers/start_thinking_sphinx.rb put:
unless Rails.rake?
  begin
    # Probe the ThinkingSphinx connection
    ThinkingSphinx.search "test", :populate => true
  rescue Mysql2::Error => err
    puts ">>> ThinkingSphinx is unavailable. Trying to start .."
    MyApp::Application.load_tasks
    Rake::Task['ts:start'].invoke
  end
end
(Replace MyApp above with your app's name)
Seems to work so far, but if I encounter any issues I'll post back here.
Obviously, the above doesn't take care of monitoring that the server stays up. You might want to do that separately. Or an alternative could be to manage the service with Upstart.
If you are using the excellent whenever gem to manage your crontab, you can just put
every :reboot do
  rake "ts:start"
end
in your schedule.rb and it seems to work great. I just tested on an EC2 instance running Ubuntu 14.04.
There are two options I can think of.
You could look at how Ubuntu manages start-up scripts and add one for this (perhaps in /etc/init?).
You could set up monit or another monitoring tool and have it keep Sphinx running. Monit should boot automatically when your server restarts, and so it should ensure Sphinx (and anything else it's tracking) is running.
The catch with Monit and other such tools is that when you deliberately stop Sphinx (say, to update configuration structure and corresponding index changes), it might start it up again before it's appropriate. So I think you should start with the first of these two options - I just don't know a great deal about the finer points of that approach.
I followed @pat's suggestion and wrote a script to start ThinkingSphinx whenever the server boots up. You can see it as a gist:
https://gist.github.com/declan/4b7cc4fb4926df16f54c
We're using Capistrano for deployment to Ubuntu 14.04, and you may need to modify the path and user name to match your server setup. Otherwise, all you need to do is
Put this script into /etc/init.d/thinking_sphinx
Confirm that the script works: calling /etc/init.d/thinking_sphinx start on the command line should start ThinkingSphinx for your app, and /etc/init.d/thinking_sphinx stop should stop it
Tell Ubuntu to run this script automatically on startup: update-rc.d thinking_sphinx defaults
There's a good post on debian-administration.org called making scripts run at boot time that has more details.

Rails application to interface with local machine running Ubuntu

What I'm trying to do:
service-hosted rails app (heroku or something)
user logs into the application and wants to "DO THINGS"
"DO THINGS" entails running commands on the local machine I have here in my apartment
I've SSHed into a server before, but I think this would work best if the server initiates the connection.
I'm fairly sure a permanent SSH connection isn't the best idea.
I'm not 100% sure on the process; I just need information transfer between my hosted application and my local machine.
Is there a set of ruby socket commands that could possibly work?
Is there any particular gem that would handle this?
Thanks ahead of time!
So far it looks like Net::SSH is the answer I'm looking for.
At the command prompt:
$ gem install net-ssh
Next we create a new controller file:
app/controllers/ssh_connections_controller.rb
and inside the ssh_connections_controller.rb file place:
def conn
  Net::SSH.start('127.0.0.1', 'wonton') do |session|
    session.open_channel do |channel|
      channel.on_close do |ch|
        puts "channel closed successfully."
        render :text => 'hits'
      end
      puts "closing channel..."
      channel.close
    end
    session.loop
  end
end
... and substitute your local settings...
'wonton' would be the name of whatever user you want to SSH in as
more to be updated!
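For the "DO THINGS" part, the usual next step is running an actual command over the session. A minimal sketch using Net::SSH's exec! (host, user, and command are placeholders):

# Sketch: run a command over SSH and capture its output.
require 'net/ssh'

def run_command
  output = nil
  Net::SSH.start('127.0.0.1', 'wonton') do |session|
    output = session.exec!('uptime') # blocks until the command finishes
  end
  output
end

puts run_command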

rails Rake and mysql ssh port forwarding

I need to create a rake task to do some Active Record operations via an SSH tunnel.
The rake task is run on a remote Windows machine, so I would like to keep things in Ruby. This is my latest attempt.
desc "Syncronizes the tablets DB with the Server"
task(:sync => :environment) do
require 'rubygems'
require 'net/ssh'
begin
Thread.abort_on_exception = true
tunnel_thread = Thread.new do
Thread.current[:ready] = false
hostname = 'host'
username = 'tunneluser'
Net::SSH.start(hostname, username) do|ssh|
ssh.forward.local(3333, "mysqlhost.com", 3306)
Thread.current[:ready] = true
puts "ready thread"
ssh.loop(0) { true }
end
end
until tunnel_thread[:ready] == true do
end
puts "tunnel ready"
Importer.sync
rescue StandardError => e
puts "The Database Sync Failed."
end
end
The task seems to hang at "tunnel ready" and never attempts the sync.
I have had success running one rake task to create the tunnel and then running the sync task in a different terminal. I want to combine these, however, so that if there is an error with the tunnel it will not attempt the sync.
This is my first time using Ruby threads and Net::SSH forwarding, so I am not sure what the issue is here.
Any ideas?
Thanks
The issue is very likely the same as here:
Cannot connect to remote db using ssh tunnel and activerecord
Don't use threads; you need to fork the importer off in another process for this to work, otherwise you will lock up with the SSH event loop.
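A sketch of that fork-based layout, using the same tunnel settings as the question (note that fork is unavailable on Windows MRI, so there you'd have to spawn a separate process instead):

# Sketch: run the sync in a child process so the parent is free
# to service the Net::SSH event loop that keeps the tunnel alive.
Net::SSH.start('host', 'tunneluser') do |ssh|
  ssh.forward.local(3333, 'mysqlhost.com', 3306)
  pid = fork do
    Importer.sync # connects to 127.0.0.1:3333 through the tunnel
  end
  # Pump the event loop until the child process exits.
  ssh.loop(0.1) { Process.waitpid(pid, Process::WNOHANG).nil? }
end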
Just running the code itself as a Ruby script (with Importer.sync disabled) seems to work without any errors. That suggests to me that the issue is with Importer.sync. Would it be possible for you to paste the Importer.sync code?
Just a guess, but could the issue be that your :sync rake task has the Rails environment as a prerequisite? Is there anything happening in your Importer class initialization that relies on this SSH connection being available at load time?
I wonder what would happen if, instead of having environment be a prereq for this task, you tried...
...
Rake::Task["environment"].execute
Importer.sync
...
