I keep getting a 503 error 30 seconds after a client sends a message through Faye. After the 30 seconds the client then receives the message and it is appended to the chat, but the error still occurs and the socket eventually closes. How can I modify my existing code to keep the websocket alive? And how can I get rid of the 30-second delay that Heroku introduces when a message is sent?
messages/add.js.erb
<% broadcast @path do %>
  var $chat = $("#chat<%= @conversation.id %>");
  $chat.append("<%= j render(@message) %>");
  //Set the scroll bar to the bottom of the chat box
  var messageBox = document.getElementById('chat<%= @conversation.id %>');
  messageBox.scrollTop = messageBox.scrollHeight;
<% end %>
$("#convoId<%= @conversation.id %>")[0].reset();
application_helper.rb
def broadcast(channel, &block)
  message = {:channel => channel, :data => capture(&block), :ext => {:auth_token => FAYE_TOKEN}}
  uri = URI.parse(FAYE_END_PT)
  Net::HTTP.post_form(uri, :message => message.to_json)
end
application.rb
config.middleware.delete Rack::Lock
config.middleware.use FayeRails::Middleware, mount: '/faye', :timeout => 25
faye.ru
require 'faye'
require File.expand_path('../faye_token.rb', __FILE__)
class ServerAuth
  def incoming(message, callback)
    # only authenticate messages published to application channels
    if message['channel'] !~ %r{^/meta/}
      # guard against messages that carry no ext payload at all
      if message['ext'].nil? || message['ext']['auth_token'] != FAYE_TOKEN
        message['error'] = 'Invalid authentication token'
      end
    end
    callback.call(message)
  end
end
Faye::WebSocket.load_adapter('thin')
faye_server = Faye::RackAdapter.new(:mount => '/faye', :timeout => 45)
faye_server.add_extension(ServerAuth.new)
run faye_server
Procfile
web: bundle exec rails server -p $PORT
worker: bundle exec foreman start -f Procfile.workers
Procfile.workers
faye_worker: rackup middlewares/faye.ru -s thin -E production
503 Error
/messages/add Failed to load resource: the server responded with a status of 503 (Service Unavailable)
I tried adding a worker dyno on Heroku along with the web dyno, with no luck. Everything works fine on localhost when running heroku local. The processes on localhost look like
forego | starting web.1 on port 5000
forego | starting worker.1 on port 5100
worker.1 | 20:33:18 faye_worker.1 | started with pid 16534
whereas even with the web dyno and worker on Heroku:
=== web (1X): bundle exec rails server -p $PORT
web.1: up 2015/12/28 20:08:02 (~ 1h ago)
=== worker (1X): bundle exec foreman start -f Procfile.workers
worker.1: up 2015/12/28 21:18:39 (~ 40s ago)
A lot of this code was taken from various tutorials, so hopefully solving this issue will make using Faye with Heroku easier for someone else as well. Thanks!
Heroku has a 30-second timeout for all requests; after that it raises an H12 error. https://devcenter.heroku.com/articles/limits#http-timeouts
In your case the broadcast helper posts to Faye synchronously inside the /messages/add request, so a slow Faye response pushes the whole request past that limit.
If a request takes more than 30 seconds, you should consider putting the work into a background job, using Delayed_Job or Sidekiq for example.
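For example, a minimal sketch of moving the Faye post out of the request cycle with Sidekiq (BroadcastWorker is a hypothetical name; FAYE_END_PT and FAYE_TOKEN are the constants already used in the question):

require 'sidekiq'
require 'net/http'
require 'uri'

class BroadcastWorker
  include Sidekiq::Worker

  # Runs in a background process, so the slow HTTP post to Faye
  # no longer counts against the 30-second request window.
  def perform(message_json)
    uri = URI.parse(FAYE_END_PT)
    Net::HTTP.post_form(uri, :message => message_json)
  end
end

The broadcast helper would then enqueue instead of posting inline:

def broadcast(channel, &block)
  message = {:channel => channel, :data => capture(&block), :ext => {:auth_token => FAYE_TOKEN}}
  BroadcastWorker.perform_async(message.to_json)
end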
Related
I have a Rails 4.2.1 app running with Unicorn as the app server.
I need to provide the user with the ability to download CSV data.
I'm trying to stream the data, but when the file takes too long, Unicorn times out and kills the process.
Is there any way to solve this problem?
My streaming code:
private

def render_csv(data)
  set_file_headers
  set_streaming_headers
  response.status = 200
  self.response_body = csv_lines(data)
  Rails.logger.debug("end")
end

def set_file_headers
  file_name = "transactions.csv"
  headers["Content-Type"] = "text/csv"
  headers["Content-disposition"] = "attachment; filename=\"#{file_name}\""
end

def set_streaming_headers
  # nginx doc: setting this to "no" allows unbuffered responses suitable for Comet and HTTP streaming applications
  headers['X-Accel-Buffering'] = 'no'
  headers["Cache-Control"] ||= "no-cache"
  headers.delete("Content-Length")
end

def csv_lines(data)
  Enumerator.new do |y|
    # ideally you'd validate the params, skipping here for brevity
    data.find_each(batch_size: 2000) do |row|
      y << "jhjj" + "\n"
    end
  end
end
If you use a configuration file, change the timeout there. Here is how I do it.
In config/unicorn.rb
root = "/home/deployer/apps/appname/current"
working_directory root
pid "#{root}/tmp/pids/unicorn.pid"
stderr_path "#{root}/log/unicorn.log"
stdout_path "#{root}/log/unicorn.log"
listen "/tmp/unicorn.appname.sock"
worker_processes 2
timeout 60 # <<< increase this if you need to
Then you would start Unicorn with:
bundle exec unicorn -D -E production -c config/unicorn.rb
I am trying to run message queues on Heroku. For this I am using the RabbitMQ Bigwig plugin.
I am publishing messages using the bunny gem and trying to receive them with the sneakers gem. This whole setup works smoothly on my local machine.
To set up the queue, I run this rake task on the server:
namespace :rabbitmq do
  desc 'Setup routing'
  task :setup_test_commands_queue do
    require 'bunny'
    conn = Bunny.new(ENV['SYNC_AMQP'], read_timeout: 10, heartbeat: 10)
    conn.start
    ch = conn.create_channel
    # get or create exchange
    x = ch.direct('testsync.pcc', :durable => true)
    # get or create queue (note the durable setting)
    queue = ch.queue('test.commands', :durable => true, :ack => true, :routing_key => 'test_cmd')
    # bind queue to exchange
    queue.bind(x, :routing_key => 'test_cmd')
    conn.close
  end
end
I am able to see this queue in the rabbitmq management plugin with the mentioned binding. My publisher looks like this:
class TestPublisher
  def self.publish(test)
    x = channel.direct("testsync.pcc", :durable => true)
    puts "publishing this = #{test}"
    x.publish(test, :persistent => true, :routing_key => 'pcc_cmd')
  end

  def self.channel
    @channel ||= connection.create_channel
  end

  def self.connection
    @conn = Bunny.new(ENV['RABBITMQ_BIGWIG_TX_URL'], read_timeout: 10, heartbeat: 10) # getting configuration from rabbitmq.yml
    @conn.start
  end
end
I am calling TestPublisher.publish to publish a message.
I have a sneakers worker like this:
require 'test_sync'
class TestsWorker
  include Sneakers::Worker
  from_queue "test.commands", env: nil

  def work(raw_event)
    puts "^" * 100
    puts raw_event
    # o = CaseNote.create!(content: raw_event, creator_id: 1)
    # puts "#########{o}"
    test = Oj.load raw_event
    test.execute
    # event_params = JSON.parse(raw_event)
    # SomeWiseService.build.call(event_params)
    ack!
  end
end
My Procfile
web: bundle exec unicorn -p $PORT -c ./config/unicorn.rb
worker: bundle exec rake jobs:work
sneaker: WORKERS=TestsWorker bundle exec rake sneakers:run
My Rakefile
require File.expand_path('../config/application', __FILE__)
require 'rake/dsl_definition'
require 'rake'
require 'sneakers/tasks'
Test::Application.load_tasks
My sneakers configuration
require 'sneakers'
Sneakers.configure amqp: ENV['RABBITMQ_BIGWIG_RX_URL'],
                   log: "log/sneakers.log",
                   threads: 1,
                   workers: 1
puts "configuring sneaker"
I am sure that the message gets published; I can see it in the rabbitmq management plugin. But the sneakers worker does not work. There is nothing in sneakers.log that can help.
sneakers.log on heroku :
# Logfile created on 2016-04-05 14:40:59 +0530 by logger.rb/41212
Sorry for the late response. I was able to get this working on Heroku. When I first hit this error, hours of debugging did not fix it, so I rewrote all of the above code without checking what was wrong with the previous version.
The only difference between this code and the correct code is the queue binding.
I had two queues on the same exchange: pcc.commands with routing key pcc_cmd, and test.commands with routing key test_cmd.
I was working with test_cmd, but as per the following line in TestPublisher:
x.publish(test, :persistent => true, :routing_key => 'pcc_cmd')
I was publishing to a different queue (pcc.commands). Hence I was not able to receive the message on the test.commands queue.
In TestsWorker
from_queue "test.commands", env: nil
This states that it fetches messages only from the test.commands queue.
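So the fix is simply to publish with the matching routing key. A minimal corrected publish call (a sketch reusing the channel helper from TestPublisher above):

x = channel.direct("testsync.pcc", :durable => true)
x.publish(test, :persistent => true, :routing_key => 'test_cmd')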
Regarding the sneakers.log file:
The above setup was not able to give me logs in sneakers.log. Yes, this setup works on your local development machine, but it was not working on Heroku. Nowadays, to debug such issues, I omit the log attribute from the configuration, like this:
require 'sneakers'
Sneakers.configure amqp: ENV['RABBITMQ_BIGWIG_RX_URL'],
                   # log: "log/sneakers.log",
                   threads: 1,
                   workers: 1
This way you will get the sneakers logs (even heartbeat logs) in the Heroku logs, which can be seen by running heroku logs -a app_name --tail.
I get a 400 Bad Request if I request
http://localhost:3000/cont/act?params=%
The log only shows:
=> Booting Thin
=> Rails 3.2.14 application starting in development on http://0.0.0.0:3000
=> Call with -d to detach
=> Ctrl-C to shutdown server
[Zonebie] Setting timezone: ZONEBIE_TZ="Tokelau Is."
dalton
>> Thin web server (v1.5.1 codename Straight Razor)
>> Maximum connections set to 1024
>> Listening on 0.0.0.0:3000, CTRL+C to stop
!! Invalid request
How can I show a custom message or redirect for this kind of error?
P.S.
I've tried a few different middleware solutions found on Google, like adding
class InvalidUriCatch
  puts "Hello"

  def initialize(app)
    @app = app
  end

  def call(env)
    puts "hello"
    query = Rack::Utils.parse_nested_query(env['QUERY_STRING'].to_s) rescue :bad_query
    if query == :bad_query
      # Rack response bodies must respond to #each, so wrap the string in an array
      [302, {'Location' => 'http://google.com'}, ['bad query']]
    else
      @app.call(env)
    end
  end
end
in my lib directory, and put
config.middleware.insert_before ActionDispatch::Static, "InvalidUriCatch"
in application.rb.
But it still doesn't work; it doesn't even output "hello" in the rails s log when I make the request.
It turned out my code doesn't work because I'm using Thin in development (my application uses Passenger in production), and Thin's parser rejects the malformed request before it ever reaches the Rack middleware.
That's why production shows an error with a backtrace while my local setup doesn't.
I am trying to run Faye automatically using the daemon_controller gem.
My class:
require "daemon_controller"
class FayeDaemon
def initialize
#controller = DaemonController.new(
:identifier => 'Faye server',
:start_command => "rackup faye.ru -s thin -E production",
:ping_command => [:tcp , 'localhost', 9292],
:log_file => 'log/faye.log',
:pid_file => 'tmp/pids/faye.pid',
:start_timeout => 5
)
end
def start
#controller.start
end
end
The method I use as a before_filter in ApplicationController:
def start_faye
  fayes = FayeDaemon.new
  fayes.start
end
As a result Faye doesn't run, with the error
DaemonController::StartTimeout (Daemon 'Faye server' didn't daemonize in time.)
when fayes.start is called.
What did I do wrong?
I highly recommend you use foreman instead of daemon_controller; you can find a good introduction here. Just install the gem and create a Procfile in your Rails root directory with two jobs, one for the server and one for Faye. It could look like this:
web: bundle exec rails server webrick -b 127.0.0.1 -p 3000 -e development
faye: bundle exec rackup faye.ru -s thin -E production
and start foreman with
foreman start
I'm having trouble figuring out how to get God to restart resque.
I've got a Rails 3.2.2 stack on an Ubuntu 10.04.3 LTS Linode slice. It's running system Ruby 1.9.3-p194 (no RVM).
There's a God init.d service at /etc/init.d/god-service that contains:
CONF_DIR=/etc/god
GOD_BIN=/var/www/myapp.com/shared/bundle/ruby/1.9.1/bin/god
RUBY_BIN=/usr/local/bin/ruby
RETVAL=0

# Go no further if config directory is missing.
[ -d "$CONF_DIR" ] || exit 0

case "$1" in
  start)
    # Create pid directory
    $RUBY_BIN $GOD_BIN -c $CONF_DIR/master.conf
    RETVAL=$?
    ;;
  stop)
    $RUBY_BIN $GOD_BIN terminate
    RETVAL=$?
    ;;
  restart)
    $RUBY_BIN $GOD_BIN terminate
    $RUBY_BIN $GOD_BIN -c $CONF_DIR/master.conf
    RETVAL=$?
    ;;
  status)
    $RUBY_BIN $GOD_BIN status
    RETVAL=$?
    ;;
  *)
    echo "Usage: god {start|stop|restart|status}"
    exit 1
    ;;
esac

exit $RETVAL
master.conf in the above contains:
load "/var/www/myapp.com/current/config/resque.god"
resque.god in the above contains:
APP_ROOT = "/var/www/myapp.com/current"
God.log_file = "/var/www/myapp.com/shared/log/god.log"

God.watch do |w|
  w.name = 'resque'
  w.interval = 30.seconds
  w.dir = File.expand_path(File.join(File.dirname(__FILE__), '..'))
  w.start = "RAILS_ENV=production bundle exec rake resque:work QUEUE=*"
  w.uid = "deploy"
  w.gid = "deploy"
  w.start_grace = 10.seconds
  w.log = File.expand_path(File.join(File.dirname(__FILE__), '..', 'log', 'resque-worker.log'))

  # restart if memory gets too high
  w.transition(:up, :restart) do |on|
    on.condition(:memory_usage) do |c|
      c.above = 200.megabytes
      c.times = 2
    end
  end

  # determine the state on startup
  w.transition(:init, { true => :up, false => :start }) do |on|
    on.condition(:process_running) do |c|
      c.running = true
    end
  end

  # determine when process has finished starting
  w.transition([:start, :restart], :up) do |on|
    on.condition(:process_running) do |c|
      c.running = true
      c.interval = 5.seconds
    end

    # failsafe
    on.condition(:tries) do |c|
      c.times = 5
      c.transition = :start
      c.interval = 5.seconds
    end
  end

  # start if process is not running
  w.transition(:up, :start) do |on|
    on.condition(:process_running) do |c|
      c.running = false
    end
  end
end
In deploy.rb I have a reload task:
task :reload_god_config do
  run "god stop resque"
  run "god load #{File.join(deploy_to, 'current', 'config', 'resque.god')}"
  run "god start resque"
end
The problem is that whether I deploy or run god (stop|start|restart|status) resque manually, I get the error message:
The server is not available (or you do not have permissions to access it)
I tried installing god into the system gems and pointing to it in god-service:
GOD_BIN=/usr/local/bin/god
but god start resque gives the same error.
However, I can start the service by doing:
sudo /etc/init.d/god-service start
So it's probably a permissions issue, related to the fact that the init.d service is run as root while god is run from the bundle by the deploy user.
What's the best way around this issue?
You're running the god service as a different user (most likely root), so the god commands issued by the deploy user can't reach the daemon.
Also check out: God not running: The server is not available (or you do not have permissions to access it)
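If that's the cause, one workaround (a sketch, assuming Capistrano 2's sudo helper and that the deploy user has the necessary sudo rights) is to issue the god commands through sudo so they reach the root-owned daemon:

task :reload_god_config do
  run "#{sudo} god stop resque"
  run "#{sudo} god load #{File.join(deploy_to, 'current', 'config', 'resque.god')}"
  run "#{sudo} god start resque"
end

Alternatively, start god itself as the deploy user so the init.d service and the Capistrano task talk to the same daemon.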
First, check whether god is installed on your machine with the god --version command. If it is, try running your god script with the -D option, for example god -c sample.god -D. That will print error messages to standard output showing where the exact issue is. I was also getting the same error when I ran the command without the -D option; when I tried with -D it reported a folder write-permission issue, which I was able to find and fix.
OK, this is an issue with your config files. Check all of them, plus the includes; somewhere one of them fails and throws this error. I checked mine, fixed some errors, and afterwards it worked perfectly!