HTTP streaming in rails not working when using Rack::Deflater - ruby-on-rails

I've set up Unicorn with Rails 3.1 and HTTP streaming works until I enable Rack::Deflater.
I've tried both with and without use Rack::Chunked. With curl I can see my response, while in Chrome I get the following error: ERR_INVALID_CHUNKED_ENCODING
The result is the same in other browsers (Firefox, Safari) and in both development (OS X) and production (Heroku).
config.ru:
require ::File.expand_path('../config/environment', __FILE__)
use Rack::Chunked
use Rack::Deflater
run Site::Application
unicorn.rb:
listen 3001, :tcp_nopush => false
worker_processes 1 # amount of unicorn workers to spin up
timeout 30 # restarts workers that hang for 30 seconds
controller:
render "someview", :stream => true
Thanks for any help.

The problem is that Rails' ActionController::Streaming renders directly into a Chunked::Body. This means the content is first chunked and then gzipped by the Rack::Deflater middleware, instead of gzipped and then chunked.
According to section 6.2.1 of the HTTP/1.1 spec, chunked must be the last encoding applied to a transfer:
Since "chunked" is the only transfer-coding required to be understood
by HTTP/1.1 recipients, it plays a crucial role in delimiting messages
on a persistent connection. Whenever a transfer-coding is applied to a
payload body in a request, the final transfer-coding applied must be
"chunked".
I fixed it for us by monkey-patching the ActionController::Streaming methods _process_options and _render_template in an initializer, so that Rails does not wrap the body in a Chunked::Body and the Rack::Chunked middleware does it instead.
module GzipStreaming
  def _process_options(options)
    stream = options[:stream]
    # delete the option to stop the original implementation
    options.delete(:stream)
    super
    if stream && env["HTTP_VERSION"] != "HTTP/1.0"
      # Same as the original implementation, except don't set the
      # Transfer-Encoding header; the Rack::Chunked middleware will handle it
      headers["Cache-Control"] ||= "no-cache"
      headers.delete('Content-Length')
      options[:stream] = stream
    end
  end

  def _render_template(options)
    if options.delete(:stream)
      # Just render; don't wrap in a Chunked::Body, let the
      # Rack::Chunked middleware handle it
      view_renderer.render_body(view_context, options)
    else
      super
    end
  end
end

module ActionController
  class Base
    include GzipStreaming
  end
end
And leave your config.ru as:
require ::File.expand_path('../config/environment', __FILE__)
use Rack::Chunked
use Rack::Deflater
run Site::Application
Not a very nice solution; it will probably break other middlewares that inspect or modify the body. If someone has a better solution I'd love to hear it.
If you are using New Relic, its middleware must also be disabled when streaming.
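One way to do that, as a sketch; the exact middleware class name is an assumption and depends on your newrelic_rpm version:
# config/application.rb (sketch; class name assumed from newrelic_rpm)
# New Relic's Rack middleware buffers the response body to inject its
# browser-monitoring script, which defeats streaming, so remove it:
config.middleware.delete NewRelic::Rack::BrowserMonitoring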

Related

How to add Faraday request body into Datadog tracing in a Rails app

I'm trying to configure Faraday tracing in a Rails application using Datadog.
I've set up the Faraday -> Datadog connection:
require 'faraday'
require 'ddtrace'
Datadog.configure do |c|
  c.use :faraday
end
Faraday.post("http://httpstat.us/200", {foo: 1, bar: 2}.to_json)
Faraday.get("http://httpstat.us/201?foo=1&bar=2")
It works well; the requests are being logged to Datadog.
But those logs do not contain any request parameters, whether GET or POST.
Any advice on how to get the request params/body logged to Datadog?
So, by default the only things sent to Datadog from Faraday as part of the span, in terms of the HTTP request, are:
span.set_tag(Datadog::Ext::HTTP::URL, env[:url].path)
span.set_tag(Datadog::Ext::HTTP::METHOD, env[:method].to_s.upcase)
span.set_tag(Datadog::Ext::NET::TARGET_HOST, env[:url].host)
span.set_tag(Datadog::Ext::NET::TARGET_PORT, env[:url].port)
Source: https://github.com/DataDog/dd-trace-rb/blob/e391d2eb64d3c6151a4bdd2710c6a8c7c1d57457/lib/ddtrace/contrib/faraday/middleware.rb#L54
The body of the request is not set in the http part of the span by default, only the URL, HTTP method, host and port are.
However, with manual instrumentation you can add anything you want to the span, so you could write an extension or monkey-patch to the Faraday middleware to add the body and parameters to the span:
span.set_tag("http.body", env[:body])
span.set_tag("http.params", env[:params])
An example monkey patch:
require 'faraday'
require 'ddtrace'
require 'ddtrace/contrib/faraday/middleware'
module FaradayExtension
  private

  def annotate!(span, env, options)
    # monkey patch to add the body and query string to the span
    span.set_tag("http.body", env[:body]) unless env[:body].to_s.strip.empty?
    span.set_tag("http.query", env[:url].query) if env[:url].query
    super
  end
end
Datadog::Contrib::Faraday::Middleware.prepend(FaradayExtension)
Datadog.configure do |c|
  c.use :faraday
end
Faraday.post("http://httpstat.us/200", {foo: 1, bar: 2}.to_json)
Faraday.get("http://httpstat.us/201?foo=1&bar=2")
This worked for me in my testing.
NB: I am a Datadog employee, but not on the engineering team; just wanted to be transparent!

Having trouble with WebMock, not stubbing correctly

Ruby 1.9.3, RSpec 2.13.0, WebMock 1.17.4, Rails 3
I am writing tests for a company app. The controller in question displays a table of a customer's placed calls, and allows for sort/filter options.
EDIT The test fails because, with my current setup, the path does not render: the recorder_server is either not running locally or not set up correctly. Please help with this, too.
An Errno::ECONNREFUSED occurred in recordings#index:
Connection refused - connect(2)
/usr/local/lib/ruby/1.9.1/net/http.rb:763:in `initialize'
-------------------------------
Request:
-------------------------------
* URL : http://www.recorder.example.com:8080/recorded_calls
* IP address: 127.0.0.1
* Parameters: {"controller"=>"recordings", "action"=>"index"}
* Rails root: /var/www/rails/<repository>
As a call is placed, its data joins an XML file created by an external API called Recorder.
The RecordingsController takes the XML file and parses it into a hash.
When you visit the associated path, you see the results of the hash: a table of placed calls, their attributes, and parameters for sort/filter.
Here is my spec so far.
require 'spec_helper'
include Helpers

feature 'Exercise recordings controller' do
  include_context "shared admin context"

  background do
    canned_xml = File.open("spec/support/assets/canned_response.xml").read
    stub_request(:post, "http://recorder.example.com:8080/recorder/index").
      with(body: {"durations"=>["1"], "durations_greater_less"=>["gt"], "filter_from_day"=>"29", "filter_from_hour"=>"0", "filter_from_minute"=>"0", "filter_from_month"=>"12", "filter_from_year"=>"2014", "filter_prefix"=>true, "filter_to_day"=>"29", "filter_to_hour"=>"23", "filter_to_minute"=>"59", "filter_to_month"=>"12", "filter_to_year"=>"2014"}, # "shared_session_id"=>"19f9a08807cc70c1bf41885956695bde"},
           headers: {'Accept'=>'*/*', 'Content-Type'=>'application/x-www-form-urlencoded', 'User-Agent'=>'Ruby'}).
      to_return(status: 200, body: canned_xml, headers: {})
    uri = URI.parse("http://recorder.example.com:8080/recorder/index")
    visit recorded_calls_path
  end

  scenario 'show index page with 1 xml result' do
    # page.save_and_open_page
    expect(title).to eq("Recorded Calls")
  end
end
And here is the RecordingsController
class RecordingsController < ApplicationController
  # before_filter options

  def index
    test_session_id = request.session_options[:id]
    # Make request to recording app for xml of files
    uri = URI.parse("http://#{Rails.application.config.recorder_server}:#{Rails.application.config.recorder_server_port}/recorder/index")
    http = Net::HTTP.new(uri.host, uri.port)
    xml_request = Net::HTTP::Post.new(uri.request_uri)
    xml_request_data = Hash.new
    # sorting params
    xml_request_data[:shared_session_id] = request.session_options[:id]
    xml_request.set_form_data(xml_request_data)
    response = http.request(xml_request)
    if response.class == Net::HTTPOK
      @recordings_xml = XmlSimple.xml_in(response.body)
      @recordings_sorted = @recordings_xml["Recording"].sort { |a,b| Time.parse("#{a["date"]} #{a["time"]}") <=> Time.parse("#{b["date"]} #{b["time"]}") } unless @recordings_xml["Recording"].nil?
    else
      @recordings_xml = Hash.new
    end
  end

  # other defs
end
Any and all advice is much appreciated. Thank you.
How I configured WebMock
I am answering my own question, with the help of B-Seven and a string of comments. File by file, I will list the changes made in order to properly use WebMock.
Add WebMock to Gemfile under group :test, :development.
Run bundle install to resolve dependencies.
My current setup included Ruby 1.9.3, Rails 3, RSpec 2.13.0, WebMock 1.17.4.
Set up spec_helper.rb to disable "real HTTP connections" (this came from a backtrace error received later in this puzzling process). To my understanding, this translates all "real connections" into localhost connections so they work offline, which is great since, ideally, I do not want the external app's server to have to run simultaneously.
require 'webmock/rspec'
WebMock.disable_net_connect!(allow_localhost: true)
In my test.rb environment file, the configurations for recorder_server and port were commented out. If left commented out, the controller would raise an exception about uninitialized constants. I used the test server/port (substituting the company name with example) as the layout for the spec stubbing.
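For reference, a minimal sketch of those settings once uncommented; the host and port are placeholders matching the stubbed URL, and the config keys come from the controller shown above:
# config/environments/test.rb
config.recorder_server = 'recorder.example.com'  # placeholder host
config.recorder_server_port = 8080               # placeholder port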
In recordings_controller_spec.rb, I had already figured out how to make a canned XML response. With the changes above, my spec was able to correctly stub a response from an external, secondary app and use that response to correctly render the view associated with the controller being tested.
require 'spec_helper'
include Helpers

feature "Exercise recordings_controller" do
  include_context "shared admin context"

  # A background is currently not used, because I have 3 scenario types... No xml
  # results, 1 result, and 2 results. I will later DRY this out with a background,
  # but the heavy lifting is over, for now.

  scenario "show index page with 1 xml result" do
    canned_xml_1 = File.open("spec/support/assets/canned_response_1.xml").read
    stub_request(:post, "http://recorder.example.com:8080/recorder/index").
      with(headers: {'Accept'=>'*/*', 'User-Agent'=>'Ruby'}).
      to_return(status: 200, body: canned_xml_1, headers: {})
    uri = URI.parse("http://recorder.example.com:8080/recorder/index")
    visit recorded_calls_path

    title.should == "Recorded Calls"
    page.should have_content("Search Results")
    page.should have_content("Inbound", "5551230000", "175", "December 24 2014", "12:36:24", "134")
  end
end
Advice/Resources that helped
With B-Seven's suggestion to my original question (see revisions), I was initially stubbing localhost:3000. He said this was incorrect. After further research, I agree since stubbing with WebMock is typically reserved for outside http connections.
In comments after his answer, B-Seven listed articles to refer to. I will list the ones that helped me the most.
http://robots.thoughtbot.com/how-to-stub-external-services-in-tests
http://railscasts.com/episodes/275-how-i-test
https://github.com/bblimke/webmock
http://www.agileventures.org/articles/testing-with-rspec-stubs-mocks-factories-what-to-choose
It is very important to read the backtraces generated from errors. What took me so long to figure out how to mock was mainly reading them incorrectly. As you can see from my question, I was making a :get stub request. A coworker pointed out that the backtrace suggested using :post. That was the final piece to make my spec pass.
I decided not to put the configuration variables in my stub request, as that would result in long lines of code. Instead I hardcoded the URL, which is why I needed to uncomment those configurations in test.rb.
Why are you stubbing localhost? I think you want to
stub_request(:get, "http://#{Rails.application.config.recorder_server}:#{Rails.application.config.recorder_server_port}/recorder/index").

rails json response with gzip compression

I have an API written in Rails which responds to each request with a JSON response.
The response could be huge, so I need to compress it using gzip.
Wondering how to do this in a Rails controller?
I have added the line
use Rack::Deflater
in config.ru
Should I also be changing something in the line which renders the JSON?
render :json => response.to_json()
Also, how do I check whether the response is in gzip format or not?
I did a curl request from the terminal, and I see only the normal plain JSON.
My post Content Compression with Rack::Deflater describes a couple of ways to integrate Rack::Deflater. The easiest would be to just update config/application.rb with:
module YourApp
  class Application < Rails::Application
    config.middleware.use Rack::Deflater
  end
end
and you'll automatically compress all controller responses with deflate/gzip if the client explicitly says it can handle it.
For the response to be in gzip format, we don't have to change the render method call.
If the request has the header Accept-Encoding: gzip, Rails will automatically compress the JSON response using gzip.
If you don't want the client to have to send a preset header, you can add the header to the request manually in the controller before rendering the response:
request.env['HTTP_ACCEPT_ENCODING'] = 'gzip'
render :json => response.to_json()
You can ask curl for a gzipped response by setting the header explicitly:
$ curl -H "Accept-Encoding: gzip, deflate" localhost:3000/posts.json > posts_json.gz
then decompress it to view the actual response JSON:
$ gzip -d posts_json.gz
$ cat posts_json
If it doesn't work, post back with the output of rake middleware to help us troubleshoot further.
In some cases you can consider writing the huge response into a file and gzipping it:
require 'zlib'

res = {} # huge data hash
json = res.to_json
Zlib::GzipWriter.open('public/api/huge_data.json.gz') { |gz| gz.write json }
and update this file regularly, for example via a scheduled task like the sketch below.
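A sketch of one way to regenerate it on a schedule (the task name and model are placeholders); run it from cron or the whenever gem:
# lib/tasks/api_export.rake (hypothetical)
require 'zlib'

namespace :api do
  desc 'Regenerate the gzipped JSON export'
  task :huge_data => :environment do
    res = Post.all.as_json # placeholder: build your huge data hash here
    json = res.to_json
    Zlib::GzipWriter.open('public/api/huge_data.json.gz') { |gz| gz.write json }
  end
end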
Consider not putting Rack middlewares in config.ru when using Rails.
Rails has had its own middleware stack manager since Rails 2.
The correct way is:
# config/application.rb or config/environment.rb, depending on your Rails version
config.middleware.use Rack::Deflater
Don't use @djcp's solution when using Rack::ETag.
Short answer:
module MyApp
  class Application < Rails::Application
    config.middleware.insert_before Rack::ETag, Rack::Deflater
  end
end
The order of Rack::Deflater and Rack::ETag matters because Rack::Deflater uses Zlib::GzipWriter to compress the response body, and by default it writes a timestamp into the gzip header. This means the compressed response body changes every second even when the original response body is the same, so the ETag computed from it changes too.
To reproduce this problem, run the following script:
require 'rack/etag'
require 'rack/deflater'
require 'rack/content_length'

@app = Rack::Builder.new do
  use Rack::ETag
  use Rack::Deflater
  use Rack::ContentLength
  run ->(*) { [200, {}, ['hello world']] }
end

def puts_etag
  puts @app.call({ 'HTTP_ACCEPT_ENCODING' => 'gzip' })[1]['ETag']
end

puts_etag
sleep 1
puts_etag
One can simply swap the lines of Rack::ETag and Rack::Deflater and get the expected (stable) output.
Rails uses Rack::ETag by default, and config.middleware.use just appends to the stack. To insert Rack::Deflater before Rack::ETag, use config.middleware.insert_before instead.
🍻

Swagger-ui only sending OPTIONS not POST http method despite working API

I am using Swagger-UI to browse my own API, built with Grape and automatically documented with grape-swagger.
I've googled and tried every suggestion I can find, but I cannot get POST to work. Here are my headers:
header "Access-Control-Allow-Origin", "*"
header "Access-Control-Allow-Methods", "POST, GET, OPTIONS, PUT, PATCH, DELETE"
header "Access-Control-Request-Method", "*"
header "Access-Control-Max-Age", "1728000"
header "Access-Control-Allow-Headers", "api_key, Content-Type"
I just threw in everything suggested. I've enabled all the HTTP methods in supportedSubmitMethods, and I have tested the API using the Postman Chrome extension; it works perfectly, creating a user properly and returning the correct data.
However, all I get with the Swagger POST is the server reporting:
Started OPTIONS "/v1/users.json" for 127.0.0.1 at 2012-12-21 04:07:13 -0800
and the Swagger response looking like this:
Request URL
http://api.lvh.me:3000/v1/users.json
Response Body
Response Code
0
Response Headers
I have also tested the OPTIONS response with Postman; it is below:
Allow →OPTIONS, GET, POST
Cache-Control →no-cache
Date →Fri, 21 Dec 2012 12:14:27 GMT
Server →Apache-Coyote/1.1
X-Request-Id →9215cba8da86824b97c6900fb6d97aec
X-Runtime →0.170000
X-UA-Compatible →IE=Edge
I had the same problem and just solved it; hope this helps somebody.
Swagger-UI accepts multiple parameters through POST only with a 'form' paramType, not a 'body' paramType, as referenced in this issue: https://github.com/wordnik/swagger-ui/issues/72
I used the branch :git => 'git://github.com/Digication/grape-swagger.git', changing the 'post' request paramType to 'form'. The generated XML output for swagger_doc (probably at path/swagger_doc/api or similar) should look something like this:
<api>
  <path>/api/v2/...</path>
  <operations type="array">
    ...
    <httpMethod>POST</httpMethod>
    <parameters type="array">
      <parameter>
        <paramType>form</paramType>
        ...More
and not:
<paramType>body</paramType>
...More
I used the grape-swagger-rails gem to automatically install Swagger-UI on localhost (the files can also be downloaded from the swagger-ui site), and everything works!
Had the same problem; fixed it by adding CORS.
Add this to your Gemfile:
gem 'rack-cors', :require => 'rack/cors'
and add this to application.rb:
config.middleware.use Rack::Cors do
  allow do
    origins '*'
    # location of your API
    resource '/*', :headers => :any, :methods => [:get, :post, :options, :put]
  end
end
Be sure that you've changed the location of your API here.
Nice to hear you are using grape-swagger; I think it is awesome :)
I am not entirely sure you are having the same problem, but when testing locally from the browser, it will check whether the origin is the same as the one requested, so to make sure I do not get that error I created a small middleware that tells the browser we allow all origins.
I am using a Rails process (created with the awesome rails-api gem), so I created a new file in lib/middleware/access_control_allow_all_origin.rb with the following content:
module Middleware
  class AccessControlAllowAllOrigin
    def initialize(app)
      @app = app
    end

    def call(env)
      status, headers, body = @app.call(env)
      allow_all_origin!(headers)
      [status, headers, body]
    end

    private

    def allow_all_origin!(headers)
      headers['Access-Control-Allow-Origin'] = '*'
      headers['Access-Control-Request-Method'] = '*'
    end
  end
end
and at the bottom of my application.rb I just add the middleware as follows:
require 'middleware/access_control_allow_all_origin'
config.middleware.insert_after Rack::ETag, Middleware::AccessControlAllowAllOrigin
Hope this helps.
I do not know the solution for Ruby on Rails, as I am using Swagger with Play Framework 2.0.2.
I gave it a domain name and changed the basePath to the domain name in the application.conf file, as swagger.api.basepath="domain-name", and it worked.
You can change the basePath in api-docs to your domain name; see the api-docs documentation.
Also, does your web server hijack headers? If you are using NGINX, for example, your OPTIONS request might not send the appropriate values in its response in some cases.
What is your OPTIONS request response? Can you dump it out here? I'll tell you if that could be the cause.

Can I use a Request / Reply - RPC pattern in Rails 3 with AMQP?

For reasons similar to the ones in this discussion, I'm experimenting with messaging in lieu of REST for a synchronous RPC call from one Rails 3 application to another. Both apps are running on Thin.
The "server" application has a config/initializers/amqp.rb file based on the Request/Reply pattern in the rubyamqp.info documentation:
require "amqp"
EventMachine.next_tick do
connection = AMQP.connect ENV['CLOUDAMQP_URL'] || 'amqp://guest:guest#localhost'
channel = AMQP::Channel.new(connection)
requests_queue = channel.queue("amqpgem.examples.services.time", :exclusive => true, :auto_delete => true)
requests_queue.subscribe(:ack => true) do |metadata, payload|
puts "[requests] Got a request #{metadata.message_id}. Sending a reply..."
channel.default_exchange.publish(Time.now.to_s,
:routing_key => metadata.reply_to,
:correlation_id => metadata.message_id,
:mandatory => true)
metadata.ack
end
Signal.trap("INT") { connection.close { EventMachine.stop } }
end
In the 'client' application, I'd like to render the results of a synchronous call to the 'server' in a view. I realize this is a bit outside the comfort zone of an inherently asynchronous library like the amqp gem, but I'm wondering if there's a way to make it work. Here is my client config/initializers/amqp.rb:
require 'amqp'

EventMachine.next_tick do
  AMQP.connection = AMQP.connect 'amqp://guest:guest@localhost'
  Signal.trap("INT") { AMQP.connection.close { EventMachine.stop } }
end
Here is the controller:
require "amqp"
class WelcomeController < ApplicationController
def index
puts "[request] Sending a request..."
WelcomeController.channel.default_exchange.publish("get.time",
:routing_key => "amqpgem.examples.services.time",
:message_id => Kernel.rand(10101010).to_s,
:reply_to => WelcomeController.replies_queue.name)
WelcomeController.replies_queue.subscribe do |metadata, payload|
puts "[response] Response for #{metadata.correlation_id}: #{payload.inspect}"
#message = payload.inspect
end
end
def self.channel
#channel ||= AMQP::Channel.new(AMQP.connection)
end
def self.replies_queue
#replies_queue ||= channel.queue("reply", :exclusive => true, :auto_delete => true)
end
end
When I start both applications on different ports and visit the welcome#index view, @message is nil in the view, since the result has not yet returned. The result arrives a few milliseconds after the view is rendered and is displayed on the console:
$ thin start
>> Using rack adapter
>> Thin web server (v1.5.0 codename Knife)
>> Maximum connections set to 1024
>> Listening on 0.0.0.0:3000, CTRL+C to stop
[request] Sending a request...
[response] Response for 3877031: "2012-11-27 22:04:28 -0600"
No surprise here: subscribe is clearly not meant for synchronous calls. What is surprising is that I can't find a synchronous alternative in the AMQP gem source code or in any documentation online. Is there an alternative to subscribe that will give me the RPC behavior I want? Given that there are other parts of the system in which I'd want to use legitimately asynchronous calls, the bunny gem didn't seem like the right tool for the job. Should I give it another look?
edit in response to Sam Stokes
Thanks to Sam for the pointer to throw :async / async.callback. I hadn't seen this technique before, and this is exactly the kind of thing I was trying to learn with this experiment in the first place. send_response.finish is gone in Rails 3, but I was able to get his example to work for at least one request with a minor change:
render :text => @message
rendered_response = response.prepare!
Subsequent requests fail with !! Unexpected error while processing request: deadlock; recursive locking. This may have been what Sam was getting at with his comment about getting ActionController to allow concurrent requests, but the cited gist only works for Rails 2. Adding config.allow_concurrency = true in development.rb gets rid of this error in Rails 3, but leads to "This queue already has default consumer" from AMQP.
I think this yak is sufficiently shaven. ;-)
While interesting, this is clearly overkill for simple RPC. Something like this Sinatra streaming example seems a more appropriate use case for client interaction with replies. Tenderlove also has a blog post about an upcoming way to stream events in Rails 4 that could work with AMQP.
As Sam points out in his discussion of the HTTP alternative, REST / HTTP makes perfect sense for the RPC portion of my system that involves two Rails apps. There are other parts of the system involving more classic asynchronous event publishing to Clojure apps. For these, the Rails app need only publish events in fire-and-forget fashion, so AMQP will work fine there using my original code without the reply queue.
You can get the behaviour you want (have the client make a simple HTTP request, to which your web app responds asynchronously), but you need more tricks. You need to use Thin's support for asynchronous responses:
require "amqp"
class WelcomeController < ApplicationController
def index
puts "[request] Sending a request..."
WelcomeController.channel.default_exchange.publish("get.time",
:routing_key => "amqpgem.examples.services.time",
:message_id => Kernel.rand(10101010).to_s,
:reply_to => WelcomeController.replies_queue.name)
WelcomeController.replies_queue.subscribe do |metadata, payload|
puts "[response] Response for #{metadata.correlation_id}: #{payload.inspect}"
#message = payload.inspect
# Trigger Rails response rendering now we have the message.
# Tested in Rails 2.3; may or may not work in Rails 3.x.
rendered_response = send_response.finish
# Pass the response to Thin and make it complete the request.
# env['async.callback'] expects a Rack-style response triple:
# [status, headers, body]
request.env['async.callback'].call(rendered_response)
end
# This unwinds the call stack, skipping the normal Rails response
# rendering, all the way back up to Thin, which catches it and
# interprets as "I'll give you the response later by calling
# env['async.callback']".
throw :async
end
def self.channel
#channel ||= AMQP::Channel.new(AMQP.connection)
end
def self.replies_queue
#replies_queue ||= channel.queue("reply", :exclusive => true, :auto_delete => true)
end
end
As far as the client is concerned, the result is indistinguishable from your web app blocking on a synchronous call before returning the response; but now your web app can process many such requests concurrently.
CAUTION!
Async Rails is an advanced technique; you need to know what you're doing. Some parts of Rails do not take kindly to having their call stack abruptly dismantled. The throw will bypass any Rack middlewares that don't know to catch and rethrow it (here is a rather old partial solution). ActiveSupport's development-mode class reloading will reload your app's classes after the throw, without waiting for the response, which can cause very confusing breakage if your callback refers to a class that has since been reloaded. You'll also need to ask ActionController nicely to allow concurrent requests.
Request/response
You're also going to need to match up requests and responses. As it stands, if Request 1 arrives, and then Request 2 arrives before Request 1 gets a response, then it's undefined which request would receive Response 1 (messages on a queue are distributed round-robin between the consumers subscribed to the queue).
You could do this by inspecting the correlation_id (which you'll have to set explicitly, by the way; RabbitMQ won't do it for you!) and re-enqueuing the message if it's not the response you were waiting for. My approach was to create a persistent Publisher object which keeps track of open requests, listens for all responses, and looks up the appropriate callback to invoke based on the correlation_id, roughly as sketched below.
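A minimal sketch of such a publisher, using the amqp gem's async API; all names here are illustrative, not taken from a real codebase:
require 'amqp'
require 'securerandom'

# Illustrative sketch: one long-lived publisher per process that matches
# replies to requests by correlation_id.
class RpcPublisher
  def initialize(channel)
    @channel = channel
    @pending = {} # correlation_id => callback
    @replies = channel.queue("rpc.replies", :exclusive => true, :auto_delete => true)
    @replies.subscribe do |metadata, payload|
      callback = @pending.delete(metadata.correlation_id)
      # Ignore (or re-enqueue) replies we are not waiting for
      callback.call(payload) if callback
    end
  end

  def request(routing_key, payload, &callback)
    id = SecureRandom.uuid
    @pending[id] = callback
    @channel.default_exchange.publish(payload,
      :routing_key    => routing_key,
      :correlation_id => id, # set explicitly; RabbitMQ won't do it for you
      :message_id     => id,
      :reply_to       => @replies.name)
  end
end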
Alternative: just use HTTP
You're really solving two different (and tricky!) problems here: persuading Rails/Thin to process requests asynchronously, and implementing request-response semantics on top of AMQP's publish-subscribe model. Given you said this is for calling between two Rails apps, why not just use HTTP, which already has the request-response semantics you need? That way you only have to solve the first problem. You can still get concurrent request processing if you use a non-blocking HTTP client library, such as em-http-request (a minimal sketch follows).
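For completeness, a minimal em-http-request sketch; the URL is a placeholder for the other Rails app's endpoint:
require 'em-http-request'

EventMachine.run do
  # Placeholder URL: point this at the other Rails app's endpoint
  http = EventMachine::HttpRequest.new('http://localhost:3001/time').get

  http.callback do
    puts http.response # response body, received without blocking the reactor
    EventMachine.stop
  end

  http.errback do
    puts "Request failed: #{http.error}"
    EventMachine.stop
  end
end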
