Streaming with ActionController::Live not working in production - ruby-on-rails

I'm working with Ruby on Rails 4.1, streaming data from a Sidekiq process using ActionController::Live. In development, my streaming works great. In production (using Nginx/Puma), it's not going nearly as well. Here's what's happening.
In production, referring to the Firebug image below, the "/events" request is being fired multiple times. Why would my EventSource fire repeatedly instead of waiting for my data? This does not happen in development.
As long as my Sidekiq process is running, it will fire repeatedly at random intervals. As soon as my Sidekiq process is complete, it will hang and not fire any more. Then that last request will eventually time out (see the red text in the image).
Here is my coffeescript:
source = new EventSource('/events')
source.addEventListener 'my_event', (e) ->
  console.log 'Got a message!'
  # Add the content to the screen somewhere
Referring to the Firebug image, it's almost like it times out during my Sidekiq process. I'm posting to redis from my worker.
Initializer
REDIS = Redis.new(url: 'redis://localhost:6379')
Sidekiq Worker
REDIS.publish('my_event', 'some data')
Then I have the controller action that the EventSource is hooked up to:
def events
  response.headers["Content-Type"] = "text/event-stream"
  redis = Redis.new(url: "redis://localhost:6379")
  # blocks the current thread
  redis.subscribe(['my_event', 'heartbeat']) do |on|
    on.message do |event, data|
      if event == 'heartbeat'
        response.stream.write("event: heartbeat\ndata: heartbeat\n\n")
      elsif event == 'my_event'
        response.stream.write("event: #{event}\n")
        response.stream.write("data: #{data.to_json}\n\n")
      end
    end
  end
rescue IOError
  logger.info 'Events stream closed'
ensure
  logger.info 'Stopping events streaming thread'
  redis.quit
  response.stream.close
end
Again, this works 100% in development. My data from Sidekiq streams perfectly to the browser. The heartbeat stuff above is because of this answer (I had the same issue).
Does anyone see something I'm doing wrong? Am I missing some form of setup for ActionController::Live to work in production with Nginx/Puma?
IMPORTANT NOTE
At the VERY END of each of those requests from the image (where it seems to time out and attempt the next one), I get one line of data successfully. So something is working, it seems. But the single EventSource is not staying open and listening for the data long enough; it just keeps repeating.

I had a similar issue with a recent project. I found it was my nginx config.
Add the following to the location section:
proxy_set_header Connection '';
proxy_http_version 1.1;
chunked_transfer_encoding off;
proxy_buffering off;
proxy_cache off;
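For reference, here's roughly how those directives might sit inside a complete location block. The location path matches the question's /events endpoint, but the upstream name is an assumption for illustration:
location /events {
    # assumed upstream pointing at Puma; adjust to your setup
    proxy_pass http://app_server;
    proxy_set_header Connection '';
    proxy_http_version 1.1;
    chunked_transfer_encoding off;
    proxy_buffering off;
    proxy_cache off;
    # optional: nginx's default proxy_read_timeout is 60s, which matches
    # the cut-offs described in the question; raise it for long-lived SSE
    proxy_read_timeout 12h;
}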
This information was obtained from the following Stack Overflow discussion:
EventSource / Server-Sent Events through Nginx
Hope this helps.

Related

Rails app hangs when there's a server sent event (SSE) that requires database actions

I'm learning & doing SSE for the first time in Rails! My controller code:
def update
  response.headers['Content-Type'] = 'text/event-stream'
  sse = SSE.new(response.stream, event: 'notice')
  begin
    User.listen_to_creation do |user_id|
      sse.write({id: user_id})
    end
  rescue ClientDisconnected
  ensure
    sse.close
  end
end
Front end:
var source = new EventSource('/site_update');
source.addEventListener('notice', function(event) {
  var data = JSON.parse(event.data);
  console.log(data);
});
Model pub/sub
class User < ApplicationRecord
  after_commit :notify_creation, on: :create

  def notify_creation
    ActiveRecord::Base.connection_pool.with_connection do |connection|
      self.class.execute_query(connection, ["NOTIFY user_created, '?'", id])
    end
  end

  def self.listen_to_creation
    ActiveRecord::Base.connection_pool.with_connection do |connection|
      begin
        execute_query(connection, ["LISTEN user_created"])
        connection.raw_connection.wait_for_notify do |event, pid, id|
          yield id
        end
      ensure
        execute_query(connection, ["UNLISTEN user_created"])
      end
    end
  end

  def self.clean_sql(query)
    sanitize_sql(query)
  end

  private

  def self.execute_query(connection, query)
    sql = self.clean_sql(query)
    connection.execute(sql)
  end
end
I've noticed that if I write something trivial to the SSE, like the tutorial's sse.write({time_now: Time.now}), everything works great, and CTRL+C in the command line successfully shuts down the local server.
However, whenever I write something that requires a database action, for example the Postgres pub/sub from this tutorial, CTRL+C doesn't shut down the local server; it just hangs and requires me to manually kill the PID.
On the actual spun-up server, sometimes a page refresh will hang forever as well. Other times, it will throw a timeout error:
ActiveRecord::ConnectionTimeoutError (could not obtain a connection from the pool within 5.000 seconds (waited 5.001 seconds); all pooled connections were in use):
Unfortunately this issue persists in production as well, where I'm using Heroku: I just get lots of timeout errors. But I think I have Heroku properly configured, and also my local settings; my understanding is I just need a sizable pool (I have 5) to pull connections from and allow multiple threads. Below you'll find some config code.
THERE ARE NO ENV VARIABLES, DEFAULTS USED!
# config/database.yml
default: &default
  adapter: postgresql
  pool: <%= ENV.fetch("RAILS_MAX_THREADS") { 5 } %>
  timeout: 5000

development:
  <<: *default
  database: proper_development
# config/puma.rb
workers Integer(ENV['WEB_CONCURRENCY'] || 1)
threads_count = Integer(ENV['MAX_THREADS'] || 5)
threads threads_count, threads_count

preload_app!

rackup      DefaultRackup
port        ENV['PORT']     || 3000
environment ENV['RACK_ENV'] || 'development'

on_worker_boot do
  # Worker specific setup for Rails 4.1+
  # See: https://devcenter.heroku.com/articles/deploying-rails-applications-with-the-puma-web-server#on-worker-boot
  ActiveRecord::Base.establish_connection
end
If it's helpful, here's the output when I run rails s:
=> Booting Puma
=> Rails 5.0.2 application starting in development on http://localhost:3000
=> Run `rails server -h` for more startup options
Puma starting in single mode...
* Version 4.3.3 (ruby 2.4.0-p0), codename: Mysterious Traveller
* Min threads: 0, max threads: 16
* Environment: development
* Listening on tcp://127.0.0.1:3000
* Listening on tcp://[::1]:3000
Use Ctrl-C to stop
The issue here seems to be the inconsistency between the Puma threads and the database connections. If some connection was initiated through AR by middleware etc., the code you have written can lead to two connections being held in the same request cycle until you receive a notification and the thread finishes its job. AR caches connections per thread, so if a request was made and a connection was checked out from the pool, it will be held by that thread. Look at this issue for more details. If you then use the connection pool to check out one more connection and make it wait until you get a notification from Postgres, two connections can potentially be held by the same waiting Puma thread.
To see this in action, start a new Rails server instance in development and make a request to your SSE endpoint. If you were getting timeouts before, you might see two connections to Postgres even though you made just one request to a newly launched server. So, even though your thread count and connection pool size were the same, you might run out of free connections in the pool. An easier way might be to just add this line in development after you check out a connection, to see how many cached connections are being held right now:
def self.listen_to_creation
  ActiveRecord::Base.connection_pool.with_connection do |connection|
    # Print or log this array
    p ActiveRecord::Base.connection_pool.instance_variable_get(:@thread_cached_conns).keys.map(&:object_id)
    begin
      execute_query(connection, ["LISTEN user_created"])
      .........
      .........
Also, the snippets you have posted seem to indicate you are running up to 16 threads against a connection pool of size 5 in the development environment, so that is a separate issue.
To fix this, you would want to investigate which thread is holding the connection and whether you can reuse it for your notification, or just increase the DB pool size.
Now, coming to SSE itself: since an SSE connection blocks and reserves a thread in your current setup, multiple requests to this endpoint can quickly starve you of Puma threads, making other requests wait. That might be acceptable if you are not expecting a lot of requests to this endpoint, but if you are, you will need more free threads, so you might even want to increase the Puma thread count. Ideally, though, a non-blocking server would work better here.
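For example, you could raise the thread count in config/puma.rb while keeping the AR pool in step; a minimal sketch, with illustrative numbers rather than a recommendation:
# config/puma.rb - illustrative numbers only
threads_count = Integer(ENV['RAILS_MAX_THREADS'] || 16)
threads threads_count, threads_count
# keep config/database.yml's pool at least as large, e.g.:
#   pool: <%= ENV.fetch("RAILS_MAX_THREADS") { 16 } %>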
Edit:
Also, I forgot to add that SSE in Rails has an issue of keeping dead connections alive when it doesn't know the connection is dead. You might have threads waiting endlessly this way until some data comes and they realize the connection is no longer valid.

Redis + ActionController::Live threads not dying

Background: We've built a chat feature in to one of our existing Rails applications. We're using the new ActionController::Live module and running Puma (with Nginx in production), and subscribing to messages through Redis. We're using EventSource client side to establish the connection asynchronously.
Problem Summary: Threads are never dying when the connection is terminated.
For example, should the user navigate away, close the browser, or even go to a different page within the application, a new thread is spawned (as expected), but the old one continues to live.
The problem as I presently see it is that when any of these situations occur, the server has no way of knowing whether the connection on the browser's end is terminated, until something attempts to write to this broken stream, which would never happen once the browser has moved away from the original page.
This problem seems to be documented on GitHub, and similar questions are asked on Stack Overflow here (pretty well the exact same question) and here (regarding getting the number of active threads).
The only solution I've been able to come up with, based on these posts, is to implement a type of thread / connection poker. Attempting to write to a broken connection generates an IOError which I can catch and properly close the connection, allowing the thread to die. This is the controller code for that solution:
def events
  response.headers["Content-Type"] = "text/event-stream"
  stream_error = false # used by flusher thread to determine when to stop
  redis = Redis.new
  # Subscribe to our events
  redis.subscribe("message.create", "message.user_list_update") do |on|
    on.message do |event, data| # when a message is received, write to stream
      response.stream.write("messageType: '#{event}', data: #{data}\n\n")
    end
    # This is the monitor / connection poker thread.
    # Periodically poke the connection by attempting to write to the stream.
    flusher_thread = Thread.new do
      while !stream_error
        $redis.publish "message.create", "flusher_test"
        sleep 2.seconds
      end
    end
  end
rescue IOError
  logger.info "Stream closed"
  stream_error = true
ensure
  logger.info "Events action is quitting redis and closing stream!"
  redis.quit
  response.stream.close
end
(Note: the events method seems to get blocked on the subscribe method invocation. Everything else (the streaming) works properly so I assume this is normal.)
(Other note: the flusher thread concept makes more sense as a single long-running background process, a bit like a garbage thread collector. The problem with my implementation above is that a new thread is spawned for each connection, which is pointless. Anyone attempting to implement this concept should do it more like a single process, not so much as I've outlined. I'll update this post when I successfully re-implement this as a single background process.)
The downside of this solution is that we've only delayed or lessened the problem, not completely solved it. We still have 2 threads per user, in addition to other requests such as ajax, which seems terrible from a scaling perspective; it looks completely impractical for a larger system with many possible concurrent connections.
I feel like I am missing something vital; I find it somewhat difficult to believe that Rails has a feature that is so obviously broken without implementing a custom connection-checker like I have done.
Question: How do we allow the connections / threads to die without implementing something corny such as a 'connection poker', or garbage thread collector?
As always let me know if I've left anything out.
Update
Just to add a bit of extra info: Huetsch over at GitHub posted this comment pointing out that SSE is based on TCP, which normally sends a FIN packet when the connection is closed, letting the other end (the server in this case) know that it's safe to close the connection. Huetsch points out that either the browser is not sending that packet (perhaps a bug in the EventSource library?), or Rails is not catching it or doing anything with it (definitely a bug in Rails, if that's the case). The search continues...
Another Update
Using Wireshark, I can indeed see FIN packets being sent. Admittedly, I am not very knowledgeable or experienced with protocol level stuff, however from what I can tell, I definitely detect a FIN packet being sent from the browser when I establish the SSE connection using EventSource from the browser, and NO packet sent if I remove that connection (meaning no SSE). Though I'm not terribly up on my TCP knowledge, this seems to indicate to me that the connection is indeed being properly terminated by the client; perhaps this indicates a bug in Puma or Rails.
Yet Another Update
@JamesBoutcher / boutcheratwest (GitHub) pointed me to a discussion on the redis website regarding this issue, specifically the fact that the .(p)subscribe method never shuts down. The poster on that site pointed out the same thing we've discovered here: the Rails environment is never notified when the client-side connection is closed, and therefore is unable to execute the .(p)unsubscribe method. He inquires about a timeout for the .(p)subscribe method, which I think would work as well, though I'm not sure which approach (the connection poker I've described above, or his timeout suggestion) would be the better solution. Ideally, for the connection poker solution, I'd like to find a way to determine whether the connection is closed on the other end without writing to the stream. As it is right now, as you can see, I have to implement client-side code to handle my "poking" message separately, which I believe is obtrusive and goofy as heck.
A solution I just did (borrowing a lot from @teeg) which seems to work okay (haven't failure-tested it, though):
config/initializers/redis.rb
$redis = Redis.new(:host => "xxxx.com", :port => 6379)

heartbeat_thread = Thread.new do
  while true
    $redis.publish("heartbeat", "thump")
    sleep 30.seconds
  end
end

at_exit do
  # not sure this is needed, but just in case
  heartbeat_thread.kill
  $redis.quit
end
And then in my controller:
def events
  response.headers["Content-Type"] = "text/event-stream"
  redis = Redis.new(:host => "xxxxxxx.com", :port => 6379)
  logger.info "New stream starting, connecting to redis"
  redis.subscribe(['parse.new', 'heartbeat']) do |on|
    on.message do |event, data|
      if event == 'parse.new'
        response.stream.write("event: parse\ndata: #{data}\n\n")
      elsif event == 'heartbeat'
        response.stream.write("event: heartbeat\ndata: heartbeat\n\n")
      end
    end
  end
rescue IOError
  logger.info "Stream closed"
ensure
  logger.info "Stopping stream thread"
  redis.quit
  response.stream.close
end
I'm currently making an app that revolves around ActionController::Live, EventSource and Puma, and for those encountering problems closing streams and such: instead of rescuing an IOError, in Rails 4.2 you need to rescue ClientDisconnected. Example:
def stream
  # begin is not required
  twitter_client = Twitter::Streaming::Client.new(config_params) do |obj|
    # Do something
  end
rescue ClientDisconnected
  # Do something when disconnected
ensure
  # Do something else to ensure the stream is closed
end
I found this handy tip from this forum post (all the way at the bottom): http://railscasts.com/episodes/401-actioncontroller-live?view=comments
Here's a potentially simpler solution which does not use a heartbeat. After much research and experimentation, here's the code I'm using with Sinatra + the sinatra-sse gem (which should be easily adapted to Rails 4):
class EventServer < Sinatra::Base
  include Sinatra::SSE
  set :connections, []
  .
  .
  .
  get '/channel/:channel' do
    .
    .
    .
    sse_stream do |out|
      settings.connections << out
      out.callback {
        puts 'Client disconnected from sse'
        settings.connections.delete(out)
      }
      redis.subscribe(channel) do |on|
        on.subscribe do |channel, subscriptions|
          puts "Subscribed to redis ##{channel}\n"
        end
        on.message do |channel, message|
          puts "Message from redis ##{channel}: #{message}\n"
          message = JSON.parse(message)
          .
          .
          .
          if settings.connections.include?(out)
            out.push(message)
          else
            puts 'closing orphaned redis connection'
            redis.unsubscribe
          end
        end
      end
    end
  end
end
The redis connection blocks on on.message and only accepts (p)subscribe/(p)unsubscribe commands. Once you unsubscribe, the redis connection is no longer blocked and can be released by the web server object which was instantiated by the initial SSE request. It automatically clears when you receive a message on redis and the SSE connection to the browser no longer exists in the connections array.
Building on @James Boutcher's answer, I used the following in clustered Puma with 2 workers, so that only one heartbeat thread is created (instead of one per worker from config/initializers/redis.rb):
config/puma.rb
on_worker_boot do |index|
  puts "worker nb #{index.to_s} booting"
  create_heartbeat if index.to_i == 0
end

def create_heartbeat
  puts "creating heartbeat"
  $redis ||= Redis.new
  heartbeat = Thread.new do
    ActiveRecord::Base.connection_pool.release_connection
    begin
      while true
        hash = { event: "heartbeat", data: "heartbeat" }
        $redis.publish("heartbeat", hash.to_json)
        sleep 20.seconds
      end
    ensure
      # no db connection anyway
    end
  end
end
Here is a solution with a timeout that will exit the blocking Redis.(p)subscribe call and kill the unused connection thread.
class Stream::FixedController < StreamController
  def events
    # Rails reserves a db connection from the connection pool for
    # each request; let's put it back into the connection pool.
    ActiveRecord::Base.clear_active_connections!

    # Time of last (non-heartbeat) activity on the stream, i.e. the last
    # time any message was sent from server to client, or the time the
    # connection was established.
    @last_active = Time.zone.now

    # Redis (p)subscribe is a blocking call, so we need a trick
    # to prevent it from freezing the request forever.
    redis.psubscribe("messages:*", 'heartbeat') do |on|
      on.pmessage do |pattern, event, data|
        # capture heartbeat from Redis pub/sub
        if event == 'heartbeat'
          # calculate idle time (in seconds) for this stream connection
          idle_time = (Time.zone.now - @last_active).to_i
          # Now we need to release the blocking Redis.(p)subscribe channel
          # to let any exception (like connection closed) propagate.
          if idle_time > 4.minutes
            # unsubscribe from Redis because the idle time was too long
            # that's all - the fix in (almost) one line :)
            redis.punsubscribe
          end
        else
          # save the time of this (last) activity
          @last_active = Time.zone.now
        end
        # write to stream - even the heartbeat - a write is sometimes the
        # chance to catch a disconnection error before idle_time expires
        response.stream.write("event: #{event}\ndata: #{data}\n\n")
      end
    end
    # blocking ends here (no way to get past this line without unsubscribing)
  rescue IOError
    Logs::Stream.info "Stream closed"
  rescue ClientDisconnected
    Logs::Stream.info "ClientDisconnected"
  rescue ActionController::Live::ClientDisconnected
    Logs::Stream.info "Live::ClientDisconnected"
  ensure
    Logs::Stream.info "Stream ensure close"
    redis.quit
    response.stream.close
  end
end
You have to use redis.(p)unsubscribe to end this blocking call; no exception can break out of it.
My simple app with information about this fix: https://github.com/piotr-kedziak/redis-subscribe-stream-puma-fix
Instead of sending a heartbeat to all the clients, it might be easier to just set a watchdog for each connection. [Thanks to @NeilJewers]
class Stream::FixedController < StreamController
  def events
    # Rails reserves a db connection from the connection pool for
    # each request; let's put it back into the connection pool.
    ActiveRecord::Base.clear_active_connections!
    redis = Redis.new
    watchdog = Doberman::WatchDog.new(:timeout => 20.seconds)
    watchdog.start
    # Redis (p)subscribe is a blocking call, so we need a trick
    # to prevent it from freezing the request forever.
    redis.psubscribe("messages:*") do |on|
      on.pmessage do |pattern, event, data|
        begin
          # write to stream; a write is also our chance to detect a
          # dead connection
          response.stream.write("event: #{event}\ndata: #{data}\n\n")
          watchdog.ping
        rescue Doberman::WatchDog::Timeout => e
          raise ClientDisconnected if response.stream.closed?
          watchdog.ping
        end
      end
    end
  rescue IOError
  rescue ClientDisconnected
  ensure
    response.stream.close
    redis.quit
    watchdog.stop
  end
end
If you can tolerate a small chance of missing a message you can use subscribe_with_timeout:
sse = SSE.new(response.stream)
sse.write("hi", event: "hello")
redis = Redis.new(reconnect_attempts: 0)
loop do
  begin
    redis.subscribe_with_timeout(5 * 60, 'mycoolchannel') do |on|
      on.message do |channel, message|
        sse.write(message, event: 'message_posted')
      end
    end
  rescue Redis::TimeoutError
    sse.write("ping", event: "ping")
  end
end
This code subscribes to a Redis channel, waits up to 5 minutes for messages, then closes the connection to Redis, pings the client, and subscribes again.

Can I use a Request / Reply - RPC pattern in Rails 3 with AMQP?

For reasons similar to the ones in this discussion, I'm experimenting with messaging in lieu of REST for a synchronous RPC call from one Rails 3 application to another. Both apps are running on thin.
The "server" application has a config/initializers/amqp.rb file based on the Request / Reply pattern in the rubyamqp.info documentation:
require "amqp"
EventMachine.next_tick do
connection = AMQP.connect ENV['CLOUDAMQP_URL'] || 'amqp://guest:guest#localhost'
channel = AMQP::Channel.new(connection)
requests_queue = channel.queue("amqpgem.examples.services.time", :exclusive => true, :auto_delete => true)
requests_queue.subscribe(:ack => true) do |metadata, payload|
puts "[requests] Got a request #{metadata.message_id}. Sending a reply..."
channel.default_exchange.publish(Time.now.to_s,
:routing_key => metadata.reply_to,
:correlation_id => metadata.message_id,
:mandatory => true)
metadata.ack
end
Signal.trap("INT") { connection.close { EventMachine.stop } }
end
In the 'client' application, I'd like to render the results of a synchronous call to the 'server' in a view. I realize this is a bit outside the comfort zone of an inherently asynchronous library like the amqp gem, but I'm wondering if there's a way to make it work. Here is my client config/initializers/amqp.rb:
require 'amqp'

EventMachine.next_tick do
  AMQP.connection = AMQP.connect 'amqp://guest:guest@localhost'
  Signal.trap("INT") { AMQP.connection.close { EventMachine.stop } }
end
Here is the controller:
require "amqp"
class WelcomeController < ApplicationController
def index
puts "[request] Sending a request..."
WelcomeController.channel.default_exchange.publish("get.time",
:routing_key => "amqpgem.examples.services.time",
:message_id => Kernel.rand(10101010).to_s,
:reply_to => WelcomeController.replies_queue.name)
WelcomeController.replies_queue.subscribe do |metadata, payload|
puts "[response] Response for #{metadata.correlation_id}: #{payload.inspect}"
#message = payload.inspect
end
end
def self.channel
#channel ||= AMQP::Channel.new(AMQP.connection)
end
def self.replies_queue
#replies_queue ||= channel.queue("reply", :exclusive => true, :auto_delete => true)
end
end
When I start both applications on different ports and visit the welcome#index view, @message is nil in the view, since the result has not yet returned. The result arrives a few milliseconds after the view is rendered and is displayed on the console:
$ thin start
>> Using rack adapter
>> Thin web server (v1.5.0 codename Knife)
>> Maximum connections set to 1024
>> Listening on 0.0.0.0:3000, CTRL+C to stop
[request] Sending a request...
[response] Response for 3877031: "2012-11-27 22:04:28 -0600"
No surprise here: subscribe is clearly not meant for synchronous calls. What is surprising is that I can't find a synchronous alternative in the AMQP gem source code or in any documentation online. Is there an alternative to subscribe that will give me the RPC behavior I want? Given that there are other parts of the system in which I'd want to use legitimately asynchronous calls, the bunny gem didn't seem like the right tool for the job. Should I give it another look?
edit in response to Sam Stokes
Thanks to Sam for the pointer to throw :async / async.callback. I hadn't seen this technique before and this is exactly the kind of thing I was trying to learn with this experiment in the first place. send_response.finish is gone in Rails 3, but I was able to get his example to work for at least one request with a minor change:
render :text => @message
rendered_response = response.prepare!
Subsequent requests fail with !! Unexpected error while processing request: deadlock; recursive locking. This may have been what Sam was getting at with the comment about getting ActionController to allow concurrent requests, but the cited gist only works for Rails 2. Adding config.allow_concurrency = true in development.rb gets rid of this error in Rails 3, but leads to This queue already has default consumer. from AMQP.
I think this yak is sufficiently shaven. ;-)
While interesting, this is clearly overkill for simple RPC. Something like this Sinatra streaming example seems a more appropriate use case for client interaction with replies. Tenderlove also has a blog post about an upcoming way to stream events in Rails 4 that could work with AMQP.
As Sam points out in his discussion of the HTTP alternative, REST / HTTP makes perfect sense for the RPC portion of my system that involves two Rails apps. There are other parts of the system involving more classic asynchronous event publishing to Clojure apps. For these, the Rails app need only publish events in fire-and-forget fashion, so AMQP will work fine there using my original code without the reply queue.
You can get the behaviour you want - have the client make a simple HTTP request, to which your web app responds asynchronously - but you need more tricks. You need to use Thin's support for asynchronous responses:
require "amqp"
class WelcomeController < ApplicationController
def index
puts "[request] Sending a request..."
WelcomeController.channel.default_exchange.publish("get.time",
:routing_key => "amqpgem.examples.services.time",
:message_id => Kernel.rand(10101010).to_s,
:reply_to => WelcomeController.replies_queue.name)
WelcomeController.replies_queue.subscribe do |metadata, payload|
puts "[response] Response for #{metadata.correlation_id}: #{payload.inspect}"
#message = payload.inspect
# Trigger Rails response rendering now we have the message.
# Tested in Rails 2.3; may or may not work in Rails 3.x.
rendered_response = send_response.finish
# Pass the response to Thin and make it complete the request.
# env['async.callback'] expects a Rack-style response triple:
# [status, headers, body]
request.env['async.callback'].call(rendered_response)
end
# This unwinds the call stack, skipping the normal Rails response
# rendering, all the way back up to Thin, which catches it and
# interprets as "I'll give you the response later by calling
# env['async.callback']".
throw :async
end
def self.channel
#channel ||= AMQP::Channel.new(AMQP.connection)
end
def self.replies_queue
#replies_queue ||= channel.queue("reply", :exclusive => true, :auto_delete => true)
end
end
As far as the client is concerned, the result is indistinguishable from your web app blocking on a synchronous call before returning the response; but now your web app can process many such requests concurrently.
CAUTION!
Async Rails is an advanced technique; you need to know what you're doing. Some parts of Rails do not take kindly to having their call stack abruptly dismantled. The throw will bypass any Rack middlewares that don't know to catch and rethrow it (here is a rather old partial solution). ActiveSupport's development-mode class reloading will reload your app's classes after the throw, without waiting for the response, which can cause very confusing breakage if your callback refers to a class that has since been reloaded. You'll also need to ask ActionController nicely to allow concurrent requests.
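In Rails 3, that last part is a one-line setting per environment (the question's edit above found the same thing):
# config/environments/development.rb
config.allow_concurrency = true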
Request/response
You're also going to need to match up requests and responses. As it stands, if Request 1 arrives, and then Request 2 arrives before Request 1 gets a response, then it's undefined which request would receive Response 1 (messages on a queue are distributed round-robin between the consumers subscribed to the queue).
You could do this by inspecting the correlation_id (which you'll have to explicitly set, by the way - RabbitMQ won't do it for you!) and re-enqueuing the message if it's not the response you were waiting for. My approach was to create a persistent Publisher object which would keep track of open requests, listen for all responses, and lookup the appropriate callback to invoke based on the correlation_id.
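A rough sketch of what such a Publisher might look like with the amqp gem; the class and method names here are my own invention, not a library API:
# Hypothetical sketch: track pending requests by correlation id and
# dispatch each reply to the callback that is waiting for it.
class Publisher
  def initialize(channel, replies_queue)
    @channel = channel
    @replies_queue = replies_queue
    @pending = {} # correlation_id => callback
    @replies_queue.subscribe do |metadata, payload|
      # Look up the callback registered for this correlation_id;
      # drop (or re-enqueue) replies we don't recognize.
      callback = @pending.delete(metadata.correlation_id)
      callback.call(payload) if callback
    end
  end

  def request(routing_key, payload, &callback)
    message_id = Kernel.rand(10101010).to_s
    @pending[message_id] = callback
    @channel.default_exchange.publish(payload,
      :routing_key => routing_key,
      :message_id  => message_id,
      :reply_to    => @replies_queue.name)
  end
end
This relies on the server echoing message_id back as correlation_id, which the server initializer above already does.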
Alternative: just use HTTP
You're really solving two different (and tricky!) problems here: persuading Rails/thin to process requests asynchronously, and implementing request-response semantics on top of AMQP's publish-subscribe model. Given you said this is for calling between two Rails apps, why not just use HTTP, which already has the request-response semantics you need? That way you only have to solve the first problem. You can still get concurrent request processing if you use a non-blocking HTTP client library, such as em-http-request.
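For illustration, a minimal non-blocking call with em-http-request might look like this (the URL and body are placeholders):
require 'em-http-request'

EventMachine.run do
  # The request fires without blocking the reactor, so other work
  # can proceed while we wait for the service to respond.
  http = EventMachine::HttpRequest.new('http://localhost:3001/resources.json')
             .post(:body => { :name => 'example' })
  http.callback { puts "[response] #{http.response_header.status}: #{http.response}" }
  http.errback  { puts "[error] request failed or timed out" }
end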

receive TCP/IP data on a rails application

I have a custom device with a TCP/IP stack implemented that's sending a byte every 5 seconds to a remote IP.
On that remote IP, I'm building a site with rails 3.1.3 that will have to receive, store and display the data sent by the custom device.
I was thinking of having a TCP socket running in the background, something like this, but I don't have a clue how to integrate it with a Rails site: where to place it, how to start it, and how to propagate the data to the views.
Does anybody have a clue how I should proceed?
To solve this I created a rake task that starts a TCP server that handles the messages.
Note: this code is more than a year old, so I'm not 100% sure how it behaves, but I think the core part is this:
@server = TCPServer.new(@host, port)
loop do
  Thread.start(@server.accept) do |tcp_socket|
    port, ip = Socket.unpack_sockaddr_in(tcp_socket.getpeername)
    begin
      loop do
        line = tcp_socket.recv(100).strip # read lines from the socket
        handle_message line               # method to handle messages from the socket
      end
    rescue SystemCallError
      # close-the-sockets logic
    end
  end
end
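If it helps, a minimal sketch of the rake task wrapper around that loop; the task name, host, and port are assumptions, since I didn't post that part:
# lib/tasks/tcp_server.rake - hypothetical wrapper, names assumed
require 'socket'

namespace :tcp_server do
  desc 'Start the TCP server that receives device data'
  task :start => :environment do
    @host = '0.0.0.0'                    # assumed bind address
    @server = TCPServer.new(@host, 3001) # assumed port
    # ... the accept loop shown above goes here ...
  end
end
The :environment dependency loads the Rails app, so handle_message can store the incoming data through ActiveRecord.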

ActiveResource EOFError on "slow" API

I'm seriously struggling to solve this one, any help would be appreciated!
I have two Rails apps, let's call them Client and Service, all very simple, normal REST interface - here's the basic scenario:
Client makes a POST /resources.json request to the Service
The Service runs a process which creates the resource and returns an ID to the Client
Again, all very simple, except that Service processing is very time-intensive and can take several minutes. When that happens, an EOFError is raised on the Client exactly 60s after the request was made (no matter what ActiveResource::Base.timeout is set to), while the Service correctly processes the request and responds with 200/201. This is what we see in the logs (chronologically):
C 00:00:00: POST /resources.json
S 00:00:00: Received POST /resources.json => resources#create
C 00:01:00: EOFError: end of file reached
/usr/ruby1.8.7/lib/ruby/1.8/net/protocol.rb:135:in `sysread'
/usr/ruby1.8.7/lib/ruby/1.8/net/protocol.rb:135:in `rbuf_fill'
/usr/ruby1.8.7/lib/ruby/1.8/timeout.rb:62:in `timeout'
...
S 00:02:23: Response POST /resources.json, 201, after 143s
Obviously the service response never reached the client. I traced the error down to the socket level and recreated the scenario in a script where I open a TCPSocket and try to retrieve data. Since I don't request anything, I shouldn't get anything back, and my request should time out after 70 seconds (see the full script at the bottom):
Timeout::timeout(70) { TCPSocket.open(domain, 80).sysread(16384) }
These were the results for a few domains:
www.amazon.com => Timeout after 70s
github.com => EOFError after 60s
www.nytimes.com => Timeout after 70s
www.mozilla.org => EOFError after 13s
www.googlelabs.com => Timeout after 70s
maps.google.com => Timeout after 70s
As you can see, some servers allowed us to "wait" for the full 70 seconds, while others terminated our connection, raising EOFErrors. When we ran this test against our service, we (expectedly) got an EOFError after 60 seconds.
Does anyone know why this happens? Is there any way to prevent these errors or extend the server-side timeout? Since our service continues "working" even after the socket was closed, I assume it must be getting terminated at the proxy level?
Every hint would be greatly appreciated!
PS: The full script:
require 'socket'
require 'benchmark'
require 'timeout'

def test_socket(domain)
  puts "Connecting to #{domain}"
  message = nil
  time = Benchmark.realtime do
    begin
      Timeout::timeout(70) { TCPSocket.open(domain, 80).sysread(16384) }
      message = "Successfully received data" # Should never happen
    rescue Timeout::Error
      # rescued before the generic clause so it isn't swallowed
      message = "Controlled client-side timeout"
    rescue => e
      message = "Server terminated connection: #{e.class} #{e.message}"
    end
  end
  puts "  #{message} after #{time.round}s"
end

test_socket 'www.amazon.com'
test_socket 'github.com'
test_socket 'www.nytimes.com'
test_socket 'www.mozilla.org'
test_socket 'www.googlelabs.com'
test_socket 'maps.google.com'
I know this is nearly a year old, but in case anyone else finds this, I wanted to add a possible culprit.
Amazon's ELB terminates idle connections after 60 seconds, so if you are using EC2 behind ELB, then ELB could be the server-side problem.
The only "documentation" I could find is https://forums.aws.amazon.com/thread.jspa?threadID=33427&start=50&tstart=50, but it's better than nothing.
Each server decides when to close the connection; it depends on the server-side software and its settings. You can't control that.
