Unable to publish message with Faye Ruby client

First off, I've already looked at:
faye ruby client is not working
I tried the recommendations and it's still not working for me.
Here's my code:
def broadcast(channel, data=nil, &block)
  return if Rails.env.test?
  if data.nil? && block_given?
    data = capture(&block)
  end
  client = Faye::Client.new(APP_CONFIG['faye_url'])
  client.publish(channel, data)
end
I tried using Net::HTTP.post_form and the server froze, with no errors or warnings of any kind. I've also tried putting it into an EM.run block, with no luck. I can publish to Faye with curl just fine, and the message is sent on to subscribers, but for some reason the Ruby client just isn't working.
I'm using faye-rails, Ruby 1.9.3 and Rails 2.3.13.
The server is behind nginx; I tried both the nginx ip/port and the Thin ip/port. Still didn't work.
It works fine in development, just not in production.
Update:
I disabled both WebSockets and EventSource to force it to use long polling, so that it would work through nginx without any errors.
It is also running as rack middleware so it shouldn't need any additional ports.
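For anyone debugging the same setup: Faye::Client#publish returns a deferrable, so attaching an errback at least surfaces the failure instead of letting it die silently. A minimal standalone diagnostic sketch (run it outside the app server, since Thin already runs its own EventMachine reactor; the URL, channel and payload below are placeholders):
require 'eventmachine'
require 'faye'

faye_url = 'http://localhost:9292/faye' # placeholder: your APP_CONFIG['faye_url']
channel  = '/test'
data     = 'hello'

EM.run do
  client = Faye::Client.new(faye_url)
  publication = client.publish(channel, data)
  publication.callback { puts 'published OK'; EM.stop }
  publication.errback  { |e| puts "publish failed: #{e.inspect}"; EM.stop }
end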


Errno::ECONNRESET: Connection reset by peer in Rails using rest-client

We have a Ruby on Rails application with a "search" feature (searching for a company). The user types a name in the browser and hits search, and the search makes a REST API call to an outside system to fetch results.
We are using the rest-client gem (for Ruby on Rails).
This seems to work for a few hours, and then the search suddenly breaks; in the log I can see:
Errno::ECONNRESET: Connection reset by peer
We tried to investigate the issue by looking into the logs, but we don't see anything useful.
To make the search work again we have to restart Passenger, after which it works immediately. This happens only in the production environment; in staging it seems to work well.
Questions:
What could be causing this reset issue?
Why does restarting Passenger in production make it start working again?
We use rest-client; should we write code to manually close the connection when this exception happens?
Could a firewall issue be causing this?
Is there any code I can put in the rescue to reset the connection so that the next call succeeds?
Code:
def call
  resp_data = RestClient.get(@request_url, @header)
rescue => error
  puts "Exception: #{error.message}"
end
Try the following:
resp_data = RestClient::Request.execute(
  method: :get,
  url: @request_url,                  #=> e.g. "https://api.example.com/auth2/endpoint"
  headers: { Authorization: @header } #=> e.g. "Bearer access_token"
)
rescue => error
  puts "Exception: #{error.message}"
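On the original question about recovering inside the rescue: one common pattern is to retry once on a connection reset, since the stale connection is discarded and a fresh one is opened on the next attempt. A hedged sketch, reusing @request_url and @header from the question:
def call
  attempts = 0
  begin
    resp_data = RestClient.get(@request_url, @header)
  rescue Errno::ECONNRESET, RestClient::ServerBrokeConnection => error
    # The dead connection is discarded; the retry opens a fresh one.
    attempts += 1
    retry if attempts < 2
    puts "Exception after retry: #{error.message}"
    raise
  end
end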
It's very weird. I ran into the same problem.
My script is shown below. It worked great on my local machine and on the remote server, until one day the disk filled up and the script died, saying: Errno::ECONNRESET: Connection reset by peer
ENV['RAILS_ENV'] = ARGV.first || ENV['RAILS_ENV'] || 'production'
require File.expand_path(File.dirname(__FILE__) + "/../config/environment")
require 'rails'
require 'rubygems'
require 'rest-client'

url = 'https://api-cn.faceplusplus.com/cardpp/v1/ocridcard'
id_card = File.expand_path(File.dirname(__FILE__) + "/id_card.jpg")
puts "== id_card: #{id_card}"

response = RestClient.post url, {
  api_key: 'Td_XijYBZUMp',
  api_secret: 'ihehEuWYwOWM',
  image_file: File.open(id_card, 'rb')
}

puts "==response: "
puts response.inspect
My environment: Ruby 2.5.0, Ubuntu Server 16, 8-core CPU, 50 GB RAM.
The problem started when my hard disk hit 100% usage, with no space left.
However, even after I freed enough disk space, the problem persisted.
After I restarted my server, the problem persisted.
When I run the script under Rails, the problem persists.
However, when I run the script standalone, it works fine.
So, finally, I turned to the curl command, which works great!
The working curl command looks like:
$ curl -X POST "https://api-cn.faceplusplus.com/cardpp/v1/ocridcard" \
  -F "api_key=Td_XijYBCOYh-Rf_kCMj" \
  -F "api_secret=iheWYoQcbCPM9n2VS" \
  -F "image_file=@scripts/id_card.jpg"
I'm late to this, but in my case the problem was that I was using AWS ElastiCache for Redis, where I had a cluster with a primary endpoint and a read-only endpoint.
This error message can show up if the application is having problems connecting to Redis!
I was using the read-only endpoint instead of the primary one; the primary is the one used to write data to Redis.
Taking a close look at my endpoint, it was something like application-name.qwerty-ro.cache.amazonaws.com:6379, and I changed it to application-name.qwerty.cache.amazonaws.com:6379 without the -ro part, which is what made it read-only.
I lost about 6 hours trying to figure it out, so I hope it helps someone else!
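If you suspect the same misconfiguration, a quick way to check is to attempt a write against the configured endpoint from a console; a read-only replica will reject it. A sketch assuming the redis gem and a URL-style setting:
require 'redis'

redis = Redis.new(url: ENV['REDIS_URL']) # e.g. redis://application-name.qwerty.cache.amazonaws.com:6379
begin
  redis.set('connectivity_check', Time.now.to_s)
  puts 'write OK: this is the primary endpoint'
rescue Redis::CommandError => e
  # A -ro endpoint typically answers: "READONLY You can't write against a read only replica."
  puts "write rejected: #{e.message}"
end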

Cannot open localhost by running Ruby in Rails application

I am trying to run a Ruby script from a Rails application using a system call, like:
def runSystemCall
  system("ruby /path/to/ruby/script/watir.rb localhost:3000/articles/14")
end
and watir.rb:
require 'watir'

def watir(url)
  bb = Watir::Browser.new :chrome
  bb.goto "#{url[0]}"
end

watir(ARGV)
When running this in the Rails application, a browser opens and sits in a "Waiting for localhost..." state until this error is raised:
.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/protocol.rb:158:in `rescue in rbuf_fill': Net::ReadTimeout (Net::ReadTimeout)
and only then does the website localhost:3000/articles/14 load.
Does anyone know why?
And when I run this in a terminal:
$ ruby /path/to/ruby/script/watir.rb localhost:3000/articles/14
a browser opens and loads localhost:3000/articles/14.
That's what I expected.
Your local server is single-threaded. That means it can only handle one request at a time. If one local request wants to load another page from the same local server, then you need at least two local threads.
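A minimal sketch of one way around the deadlock: replace the blocking system call with spawn, so the action returns immediately and the single server thread is free to serve the page the Watir browser requests (method and paths as in the question):
def runSystemCall
  # spawn returns right away, unlike system, which blocks until the
  # script exits while the script is itself waiting on this server.
  pid = spawn("ruby /path/to/ruby/script/watir.rb localhost:3000/articles/14")
  Process.detach(pid) # reap the child in the background, avoiding zombies
end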

Rails send_file not working with files larger than 400MB

I used Helicon Zoo to set up a Rails application on a Windows Server 2008 machine.
My problem is downloading files larger than 400 MB.
In my Rails app I use the following to send files to a client:
app/controllers/hosted_files_controller.rb
class HostedFilesController < ApplicationController
  before_filter :authenticate_user!
  around_filter :catch_not_found

  def download
    @footprint = UserAbility.new(current_user).footprints.find(params[:id])
    send_file path
  end

  private

  def path
    if @footprint.subpath?
      @path = "#{HOSTED_FILES_PATH}\\#{@footprint.subpath}\\#{@footprint.filename}"
    else
      @path = "#{HOSTED_FILES_PATH}\\#{@footprint.filename}"
    end
  end

  def catch_not_found
    yield
  rescue ActiveRecord::RecordNotFound
    recover_and_log "We couldn't find that record.", "No record found using the id (#{params[:id]})"
  rescue ActionController::MissingFile
    recover_and_log "We couldn't find the file on our server.", "The file was not found at the following path: #{@path}"
  end

  def recover_and_log(displayed, logged)
    logger.info "!!! Error: #{logged}"
    redirect_to root_url, alert: displayed
  end
end
I have config.action_dispatch.x_sendfile_header commented out in the production.rb file, since I am not using Apache or Nginx.
This works great for all files on the server below ~400 MB. Above that, I get a 500 internal server error from Helicon Zoo that says the following:
Helicon Zoo module has caught up an error. Please see the details below.
Worker Status
The process was created
Windows error
The pipe has been ended. (ERROR CODE: 109)
Internal module error
message: ZooApplication backend read Error.
type: ZooException
file: Jobs\JobBase.cpp
line: 566
version: 3.1.98.508
STDERR
Empty stderr
Does anyone have any idea what is going on? I'm at a loss.
I've tried:
increasing the buffer_size on send_file (didn't work)
playing around with memory settings in IIS for the application pool (didn't work)
changing x_sendfile_header to X-Sendfile and X-Accel-Redirect (didn't work)
I'm considering installing Apache on the Windows Server and using x_sendfile_header to offload sending the file to Apache, but I'm afraid of breaking the already (almost) working application.
Does anyone have any ideas of how to fix this?
By default, with the current version of Helicon Zoo, Ruby applications are installed using the FastCGI Ruby Rack connector. Since FastCGI is a blocking protocol, it may impose limits on request timeout or maximum request size. If you need to send large files, I suggest you go the "Ruby 1.9 Rack over HTTP with Thin" route instead. I suppose you've been following the Ruby on Rails (2.3.x and 3.x.x) instructions. Now just follow the additional steps from the Ruby Rack over HTTP with Thin instructions: run "gem install thin" and edit web.config as follows:
In the <handlers> section, comment out the two lines that follow
<!-- Ruby 1.9 over FastCGI -->
and uncomment the two lines that follow
<!-- Ruby 1.9 over HTTP, using Thin as a back-end application server -->
In the <environmentVariables> section, uncomment the line below this comment:
<!-- Use this APP_WORKER with HTTP Ruby engine and Thin. Thin needs to be installed. -->
<add name="APP_WORKER" value="GEM_HOME\bin\thin start" />
Another solution, since you are already using Helicon products, would be to install Helicon Ape, which provides support for the X-Sendfile HTTP header (see its documentation) as in Apache and is free for several sites per server. This solution may even be better, since low-level WinHTTP code is used to send the data, which decreases server load and improves response speed.
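If switching connectors isn't an option, another thing worth trying is streaming the file in chunks instead of handing the whole body over at once, so data keeps flowing through the FastCGI pipe. A minimal sketch, assuming Rails 3.1+ and the path method from the question (whether it truly streams still depends on the connector not buffering the response):
def download
  @footprint = UserAbility.new(current_user).footprints.find(params[:id])
  response.headers['Content-Type']        = 'application/octet-stream'
  response.headers['Content-Disposition'] = "attachment; filename=\"#{@footprint.filename}\""
  response.headers['Content-Length']      = File.size(path).to_s
  # Any object responding to #each can serve as a Rack body; an Enumerator
  # lets us yield the file in 1 MB chunks instead of loading it whole.
  self.response_body = Enumerator.new do |body|
    File.open(path, 'rb') do |file|
      while (chunk = file.read(1.megabyte))
        body << chunk
      end
    end
  end
end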

net/http rails issue works only in console

In the Ruby on Rails console, 'net/http' works, but in a controller it doesn't, and it gives a timeout error.
require 'net/http'
uri = URI('http://localhost:3000/api_json.json')
json = Net::HTTP.get(uri)
parsed_json = ActiveSupport::JSON.decode(json)
Most likely you're using the default Webrick server, which serves one request at a time. So from the console it works fine, but it fails when you call it from a controller (when the Webrick worker is already busy).
You can set up and run another server such as Unicorn or Thin, or run two Webrick instances on different ports:
rails server
rails server -p 3001
and go to localhost:3001
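Whichever server you pick, it also helps to set explicit timeouts on the internal call, so a busy server fails fast instead of hanging for the full 60 seconds. A sketch against the second instance suggested above:
require 'net/http'

uri = URI('http://localhost:3001/api_json.json') # the second instance from the answer above
http = Net::HTTP.new(uri.host, uri.port)
http.open_timeout = 2 # seconds to wait for the TCP connection
http.read_timeout = 5 # seconds to wait for the response body
json = http.get(uri.request_uri).body
parsed_json = ActiveSupport::JSON.decode(json)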
@dimuch's solution might have solved your issue, but this might help someone facing a similar situation. I will explain the issue and the solution in detail (an extension of @dimuch's solution).
Issue:
You might have a controller action like /test_controller/test_method, and from it you might want to call another endpoint in the same app, like /api/v1/some_test_api, and run into an error like:
Completed 500 Internal Server Error in 60004.4ms
[27580c5c46770812c550188346c2dd3e] [127.0.0.1] [/xauth_test/sanity_oauth_login]
Timeout::Error (Timeout::Error)
Solution:
As @dimuch said, "Most likely you're using default Webrick server, that serves one request a time...".
1. Run the application on two different ports, e.g. rails s -p 3000 and rails s -p 3001, then make the request to port 3001.
If you face an issue like "A server is already running. Check /tmp/pids/server.pid. Exiting", try running rails s -p 3001 -P /tmp/pids/server2.pid (the -P flag gives the second instance its own PID file).
2. Use another server such as Unicorn or Puma.
Note: if you just need this for local testing, I would suggest the first solution, which is easy and simple. I found most of these solutions on other Stack Overflow pages and websites, which I am attaching (links for reference) below; sorry if I missed crediting someone or something. Hope this helps someone.
Refs:
For running multiple instances:
Running multiple instances of Rails Server
Similar errors and way they are handled:
Rails HTTParty Getting Timeout::Error
Faraday timeout error with omniauth (custom strategy)/doorkeeper
Strange Timeout::Error with render_to_string and HTTParty in Controller Action
Configuring Unicorn & Puma:
http://vladigleba.com/blog/2014/03/21/deploying-rails-apps-part-3-configuring-unicorn/
https://github.com/puma/puma

Cramp and heroku

I have been playing around with Cramp to do some real-time pushing of information in an app. It's all working great locally, but when I push to Heroku I seem to be having issues with the ports.
I have a socket set up in Cramp which inherits from Cramp::Websocket:
class LiveSocket < Cramp::Websocket
and I also have a Cramp action for the home page which basically just renders some ERB:
class HomeAction < Cramp::Action
In my routes file I set up the following, along with a static file server:
Rack::Builder.new do
  puts "public file at #{File.join(File.dirname(__FILE__), '../public')}"
  file_server = Rack::File.new(File.join(File.dirname(__FILE__), 'public'))
  routes = HttpRouter.new do
    add('/').to(HomeAction)
    get('/LiveSocket').to(LiveSocket)
  end
  run Rack::Cascade.new([file_server, routes])
end
Then on the client end the JavaScript connects to:
var ws = new WebSocket("ws://<%= request.host_with_port %>/LiveSocket");
As I say, locally it all works: we connect and start receiving notifications from the server. On Heroku we run Thin on the Cedar stack and have a Procfile which looks like:
web: bundle exec thin --timeout 0 start -p $PORT
When I load the site, the page itself loads fine, but when the websocket tries to connect I get an error which says:
servername.herokuapp.com Unexpected response code: 200
I'm guessing this has something to do with how Heroku routes its requests, but I do know that you can run a Node.js websocket server on Heroku, so I figure there must be a way to get this working too.
Thanks in advance for any help.
cheers
stuart
I don't think Heroku supports websockets :( http://devcenter.heroku.com/articles/http-routing#the_herokuappcom_http_stack
