ruby interprocess communication - ruby-on-rails

I have a Rails project and two ruby mini-daemons running in the background. What's the best way to communicate between them?
Communication like below should be possible:
Rails -> Process 1 -> Process 2 -> Rails
Some requests would be sync, others async.
Queues (something like AMQ, or a custom Redis-based solution) or RPC-style HTTP calls?

Check DRb as well.
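DRb (dRuby) ships with the standard library and lets one Ruby process call methods on objects that live in another process. A minimal sketch of the idea (the service class, port, and URI below are illustrative, not something from the question):
# daemon.rb - expose an object over dRuby
require 'drb/drb'

class TimeService
  def now
    Time.now
  end
end

DRb.start_service('druby://localhost:8787', TimeService.new)
DRb.thread.join

# rails/client side - call it as if it were a local object
require 'drb/drb'

remote = DRbObject.new_with_uri('druby://localhost:8787')
puts remote.now
Synchronous calls map naturally onto DRb; for async behaviour you would still need to layer a queue or threads on top.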

I implemented a system via RabbitMQ + the bunny gem.
Update:
After reading http://blog.brightbox.co.uk/posts/queues-and-callbacks I decided to try out RabbitMQ. There are two gems: amqp (async, EventMachine-based) and bunny (sync). amqp is great, but if you're using Rails with Passenger it can do some weird things.
The system works like this: the daemons listen on a queue for messages:
# The incoming data should be a JSON encoded hash that looks like:
# { "method" => method_to_call, "opts" => [ Array of opts for method ],
#   "output" => "a queue where to send the result (optional)" }
# If output is specified it will publish the JSON encoded response there.
def listen_on(queue_name, klass)
  BUNNY.start
  bunny = BUNNY.queue(queue_name)
  bunny.subscribe do |msg|
    msg = JSON.parse(msg[:payload])
    result = klass.new.send(msg["method"], *msg["opts"])
    if msg["output"]
      BUNNY.queue(msg["output"]).publish(result.to_json)
    end
  end
end
So once a message is received, it calls a method on a class. One thing to note: it would have been ideal to use bunny for Rails and amqp in the daemons, but I like to use one gem per service.
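For the Rails side, the publishing half can be a small wrapper around the same connection. A minimal sketch, assuming BUNNY is the shared Bunny connection set up in an initializer and that the daemon listens on a queue named "worker_queue" (both names are illustrative):
def call_daemon(method, opts, output_queue = nil)
  payload = { "method" => method, "opts" => opts }
  payload["output"] = output_queue if output_queue
  # Queue#publish goes through the default exchange, routed to the queue name.
  BUNNY.queue("worker_queue").publish(payload.to_json)
end

# Fire-and-forget (async) request:
call_daemon("resize_image", [42])

# Request whose result should come back on the "rails_replies" queue:
call_daemon("fetch_stats", ["2012-01"], "rails_replies")
For a synchronous call, Rails would then subscribe to (or poll) the output queue and block until the reply arrives.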

Related

How do I transfer Data using Web Server/TCPsockets in Ruby?

I have a data scraper in Ruby that retrieves article data.
Another dev on my team needs my scraper to spin up a web server he can make a request to, so that he can import the data into a Node application he's built.
Being a junior, I do not understand the following:
a) Is there a proper convention in Rails that tells me where to place my scraper.rb file?
b) Once that file is properly placed, how would I get the server to accept connections and serve the scraped data?
c) What (functionally) is the relationship between the ports, sockets, and routing?
I understand this may be a rookie question, but I honestly don't know. Can someone please break this down?
I have already:
i) Set up a server.rb file listening on localhost:2000, but I'm not sure how to create a proper route or connection that allows someone to use Postman against a valid route and get my data.
require 'socket'
require 'mechanize'
require 'awesome_print'

port = ENV.fetch("PORT", 2000).to_i
server = TCPServer.new(port)
puts "Listening on port #{port}..."
puts "Current Time : #{Time.now}"

loop do
  # Block until a client connects.
  client = server.accept
  client.puts "= Running Web Server ="

  general_sites = [
    "https://www.lovebscott.com/",
    "https://bleacherreport.com/",
    "https://balleralert.com/",
    "https://peopleofcolorintech.com/",
    "https://afrotech.com/",
    "https://bossip.com/",
    "https://www.itsonsitetv.com/",
    "https://theshaderoom.com/",
    "https://shadowandact.com/",
    "https://hollywoodunlocked.com/",
    "https://www.essence.com/",
    "http://karencivil.com/",
    "https://www.revolt.tv/"
  ]

  holder = []
  agent = Mechanize.new
  general_sites.each do |site|
    page = agent.get(site)
    links = page.search('a')
    links.each do |link|
      href = link.attr('href').to_s
      holder.push(href) if href.length > 50
    end
    pp "#{holder.length} [ posts total] ==> Now Scraping --> #{site}"
  end

  # Write the collected links back to the client and close the connection.
  client.write(holder)
  client.close
end
In Rails you don't spin up a web server manually; that's done for you by rackup, Unicorn, Puma or any other compatible application server.
Rails itself never "talks" to the HTTP clients directly; it is just a specific application that exposes a Rack-compatible API (basically an object that responds to call(env) and returns a [status, headers, body] triple: an integer, a hash, and an enumerable of strings). The app server reads the data from Unix/TCP sockets and calls your application.
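For illustration only (this is not part of your scraper), the entire Rack contract fits in a few lines of config.ru:
# config.ru - the smallest possible Rack application
class HelloApp
  def call(env)
    [200, { "Content-Type" => "text/plain" }, ["Hello from Rack\n"]]
  end
end

run HelloApp.new   # `rackup` (or Puma/Unicorn) binds the socket and serves this
Rails is, at its core, a much bigger version of HelloApp.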
If you want to expose your scraper to an external consumer (provided it's fast enough), you can create a controller with a method that accepts some data, runs the scraper, and finally renders back the scraping results in some structured way. Then in the router you connect some URL to your controller method.
# config/routes.rb
post 'scrape/me', to: 'my#scrape'

# app/controllers/my_controller.rb
class MyController < ApplicationController
  def scrape
    site = params[:site]
    results = MyScraper.run(site)
    render json: results
  end
end
and then a simple POST to yourserver/scrape/me?site=www.example.com will give you back your data.
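Any HTTP client can then consume that endpoint: the Node app, Postman, or a quick Ruby check like this sketch (which assumes the Rails app is running locally on port 3000; MyScraper and the route are the ones defined above):
require 'net/http'
require 'json'

uri = URI("http://localhost:3000/scrape/me?site=https://www.example.com")
response = Net::HTTP.post(uri, "")   # empty body; the site goes in the query string
results = JSON.parse(response.body)
puts results.inspect
Depending on your protect_from_forgery settings, you may also need to skip CSRF verification for this action so external POSTs are accepted.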

Rails 4: how to make request from outside of Rails app during integration test?

I have an application that needs 2-way communication with an external daemon (bitcoind). There is functionality in bitcoind that allows it to call my application whenever a new block or a transaction of interest occurs ('--walletnotify' and '--blocknotify'). For that I'm using curl to request "http://myapp/walletnotify" and so on:
walletnotify = /usr/bin/curl --max-time 60 http://myapp/walletnotify/%s
I'm trying to create integration tests for this callback behavior. Unfortunately, when running integration tests I'm receiving errors on the daemon side, as it is not able to perform requests to "http://myapp/walletnotify" - obviously the Rails server cannot be reached (or the connection is interrupted?). Of course the tests fail, as the appropriate actions are not called.
My question is: how to properly test such scenario? Is there any way to allow for direct external requests to application during integration tests? Is there a way to make sure that Rails server is running during integration tests? Or maybe I should listen to such requests inside integration test and then proxy them to application?
Update 2018-06-03: I'm using minitest. The test that I'm trying to run is here:
https://github.com/cryptogopher/token_voting/blob/master/test/integration/token_votes_notify_test.rb
After calling
@rpc.generate(101)
the bitcoind daemon in regtest mode should generate 101 blocks and fire the 'blocknotify' callbacks. The problem is that it cannot send an HTTP request to the application during the test.
Ok, I resolved this one.
It looks like minitest does not start an application server, no matter which driver you choose. Still, you can start your own HTTP server to listen for notifications from external sources and forward them to the tested application.
For this to work you need:
require 'webrick'
Set up the HTTP server (logging disabled to avoid clutter):
server = WEBrick::HTTPServer.new(
  Port: 3000,
  Logger: WEBrick::Log.new("/dev/null"),
  AccessLog: []
)
Specify how to handle incoming HTTP requests. In my case there will only be GET requests, but it is important to forward them with their original headers:
server.mount_proc '/' do |req, resp|
  headers = {}
  req.header.each { |k, v| v.each { |a| headers[k] = a } }
  # Forward the request into the application under test; the notifying
  # daemon does not care about the body of the response it gets back.
  get req.path, {}, headers
end
Start the HTTP server in a separate thread (server.start blocks the thread it runs in) and make sure it is shut down when the test run finishes:
@t = Thread.new { server.start }

Minitest.after_run do
  @t.kill
  @t.join
end
Finally, wait until the server is actually running before the tests proceed:
Timeout.timeout(5) do
  sleep 0.1 until server.status == :Running
end

Trigger/Subscribe to websocket-rails event from inside a rails runner

I have a Rails Application with websocket-rails gem.
Inside my application there is a Daemon that I launch with rails runner MyDaemon.start
I'm using websocket-rails Synchronization, so my config/initializers/websocket_rails.rb looks like this:
WebsocketRails.setup do |config|
  config.log_internal_events = false
  config.standalone = false
  config.synchronize = true
end
Inside MyDaemon, using synchronization, I can trigger events that will reach both my WebsocketRails::BaseController and my JavaScript WebSocketRails client.
What I'm trying to do is find a way to bind to events from my MyDaemon.
I've tried to implement a plain WebSocket client using both faye-websocket-ruby and websocket-client-simple, but after banging my head on my keyboard for some time, I figured out that there is some kind of "handshake" process using the connection_id from the client_connected message. Basically none of the solutions provided in this other SO question work for me.
I need to understand whether, inside my MyDaemon, I can subscribe directly to some WebsocketRails callback (even inside an EventMachine), or how I should implement a WebSocket client in Ruby itself.
My last attempt to have a ruby client can be found in this gist, and this is a sample output:
ruby client.rb ws://localhost:3000/websocket
[:open, {"upgrade"=>"websocket", "connection"=>"Upgrade", "sec-websocket-accept"=>"zNTdGvxFKJeP+1PyGf27T4x2PGo="}]
JSON message is
[["client_connected", {"id"=>nil, "channel"=>nil, "user_id"=>nil, "data"=>{"connection_id"=>"4b7b91001befb160d17b"}, "success"=>nil, "result"=>nil, "token"=>nil, "server_token"=>nil}]]
client id is 4b7b91001befb160d17b
[:message, "[[\"client_connected\",{\"id\":null,\"channel\":null,\"user_id\":null,\"data\":{\"connection_id\":\"4b7b91001befb160d17b\"},\"success\":null,\"result\":null,\"token\":null,\"server_token\":null}]]"]
JSON message is
[["websocket_rails.ping", {"id"=>nil, "channel"=>nil, "user_id"=>nil, "data"=>{}, "success"=>nil, "result"=>nil, "token"=>nil, "server_token"=>nil}]]
Sending ["pong",{}]
[:message, "[[\"websocket_rails.ping\",{\"id\":null,\"channel\":null,\"user_id\":null,\"data\":{},\"success\":null,\"result\":null,\"token\":null,\"server_token\":null}]]"]
[:close, 1006, ""]
While the log of websocket-rails is:
I [2015-06-27 02:08:45.250] [ConnectionManager] Connection opened: #<Connection::2b3dddaf3ec4ed5e3550>
I [2015-06-27 02:08:45.251] [Dispatcher] Started Event: client_connected
I [2015-06-27 02:08:45.251] [Dispatcher] Name: client_connected
I [2015-06-27 02:08:45.251] [Dispatcher] Data: {"connection_id"=>"2b3dddaf3ec4ed5e3550"}
I [2015-06-27 02:08:45.251] [Dispatcher] Connection: #<Connection::2b3dddaf3ec4ed5e3550>
I [2015-06-27 02:08:45.251] [Dispatcher] Event client_connected Finished in 0.000174623 seconds
I [2015-06-27 02:09:05.252] [ConnectionManager] Connection closed: #<Connection::2b3dddaf3ec4ed5e3550>
I [2015-06-27 02:09:05.252] [Dispatcher] Started Event: client_disconnected
I [2015-06-27 02:09:05.252] [Dispatcher] Name: client_disconnected
I [2015-06-27 02:09:05.252] [Dispatcher] Connection: #<Connection::2b3dddaf3ec4ed5e3550>
I [2015-06-27 02:09:05.253] [Dispatcher] Event client_disconnected Finished in 0.000236669 seconds
Probably I'm missing something very stupid, so I'm here to ask for your help!
You can use Iodine as a websocket client (I'm the author):
require 'iodine/http'
# prevents the Iodine's server from running
Iodine.protocol = :timer
# starts Iodine while the script is still running
Iodine.force_start!
options = {}
options[:on_open] = Proc.new {puts 'Connection Open'; write "Hello World!" }
options[:on_close] = Proc.new {puts 'Connection Closed'}
options[:on_message] = Proc.new {|data| puts "I got: #{data}" }
# connect to an echo server for demo. Use the blocking method:
websocket = Iodine::Http::WebsocketClient.connect "wss://echo.websocket.org/", options
websocket << "sending data"
sleep 0.5
websocket.close
As a side note, reading around I noticed that the websocket-rails gem isn't being updated all that much. See this question.
As an alternative, you can run websockets inside your Rails app by using the Plezi framework (I'm the author).
It's quite easy to use both frameworks at the same time on the same server. This way you can use your Rails model's code inside your Plezi Websocket controller.
Because Plezi will manage the websockets and Rails will probably render the 404 Not Found page, Plezi's routes will take precedence... but as long as your routes don't override each other, you're golden.
Notice that to allow both apps to run together, Plezi will force you to use Iodine server as your Rack server. To avoid this you can use the Placebo API and run Plezi on a different process.
You can read more in the framework's README file.

Server client to Browser client communication

This is more of a conceptual understanding gap than a technical one. I am new to WebSocket/messaging APIs.
I ran a chat application using the faye Ruby server and everything works fine between two browsers. I want to send a message from a standalone Ruby client to a browser client which is sending messages to the same server. Is it possible to send a message from a client like the one below to a browser whose script is also given below?
This is not related to the application I created, but I was trying to understand the use of the WS client API. Or, specifically put, can I send a message from a server-side client to a browser client? I guess I am lacking an understanding of the word 'client' here.
I see the messages on the server console, but the browser doesn't get the message sent by the standalone client.
Also, I see this when I run the client:
Started GET "/faye/test123" for 127.0.0.1 at 2015-04-09 07:17:46 -0400
require 'faye'
require 'eventmachine'

EM.run {
  ws = Faye::WebSocket::Client.new('ws://localhost:9292/faye/test123')

  ws.onopen = lambda do |event|
    p [:open, ws.headers]
    ws.send('987654321')
  end

  ws.on :open do |event|
    p [:open]
    ws.send('123 123 123 123')
    p [:sent]
  end
}
Browser script:
window.client = new Faye.Client('http://localhost:9292/faye');
client.subscribe('/test123', function(payload) {
  if (payload.message) {
    console.log('I am in here 77777.......' + payload.message);
    return $("#incomingText").append(payload.message);
  }
});
Looking at your code I think it might be useful to highlight the difference between websockets and faye.
Faye is a framework that supports a number of transports, websockets being just one of them. It can also do long polling for example. One of the benefits of Faye is that it can select the right transport that both the client and server understand. It also implements a simple pub/sub protocol on top of that transport, giving you a nice API to build off of.
Doing a pure websocket implementation is totally doable, but if you're going to go with Faye it's probably a good idea to use Faye's publish/subscribe API and not muck with Faye's websockets directly.
To answer your specific question:
Is it possible to send a message from a client like the one below to a browser whose script is also given below ?
Yes, absolutely, but I would suggest doing it with Faye::Client. Here's what your server-side code might look like:
client = Faye::Client.new('http://localhost:9292/faye')
client.publish('/test123', 'message' => 'Hello world')
With much more info here:
http://faye.jcoglan.com/ruby/clients.html
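One practical caveat (my own note, consistent with the docs linked above): the Ruby Faye client also runs on EventMachine, so in a standalone script the publish usually goes inside a reactor block that is stopped once the publication resolves, roughly:
require 'eventmachine'
require 'faye'

EM.run {
  client = Faye::Client.new('http://localhost:9292/faye')
  publication = client.publish('/test123', 'message' => 'Hello world')
  publication.callback { puts 'message sent'; EM.stop }
  publication.errback  { |error| puts "error: #{error.message}"; EM.stop }
}
The payload uses the 'message' key so that the browser-side payload.message check in your subscriber fires.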

Can I use a Request / Reply - RPC pattern in Rails 3 with AMQP?

For reasons similar to the ones in this discussion, I'm experimenting with messaging in lieu of REST for a synchronous RPC call from one Rails 3 application to another. Both apps are running on thin.
The "server" application has a config/initializers/amqp.rb file based on the Request / Reply pattern in the rubyamqp.info documentation:
require "amqp"
EventMachine.next_tick do
connection = AMQP.connect ENV['CLOUDAMQP_URL'] || 'amqp://guest:guest#localhost'
channel = AMQP::Channel.new(connection)
requests_queue = channel.queue("amqpgem.examples.services.time", :exclusive => true, :auto_delete => true)
requests_queue.subscribe(:ack => true) do |metadata, payload|
puts "[requests] Got a request #{metadata.message_id}. Sending a reply..."
channel.default_exchange.publish(Time.now.to_s,
:routing_key => metadata.reply_to,
:correlation_id => metadata.message_id,
:mandatory => true)
metadata.ack
end
Signal.trap("INT") { connection.close { EventMachine.stop } }
end
In the 'client' application, I'd like to render the results of a synchronous call to the 'server' in a view. I realize this is a bit outside the comfort zone of an inherently asynchronous library like the amqp gem, but I'm wondering if there's a way to make it work. Here is my client config/initializers/amqp.rb:
require 'amqp'

EventMachine.next_tick do
  AMQP.connection = AMQP.connect 'amqp://guest:guest@localhost'
  Signal.trap("INT") { AMQP.connection.close { EventMachine.stop } }
end
Here is the controller:
require "amqp"
class WelcomeController < ApplicationController
def index
puts "[request] Sending a request..."
WelcomeController.channel.default_exchange.publish("get.time",
:routing_key => "amqpgem.examples.services.time",
:message_id => Kernel.rand(10101010).to_s,
:reply_to => WelcomeController.replies_queue.name)
WelcomeController.replies_queue.subscribe do |metadata, payload|
puts "[response] Response for #{metadata.correlation_id}: #{payload.inspect}"
#message = payload.inspect
end
end
def self.channel
#channel ||= AMQP::Channel.new(AMQP.connection)
end
def self.replies_queue
#replies_queue ||= channel.queue("reply", :exclusive => true, :auto_delete => true)
end
end
When I start both applications on different ports and visit the welcome#index view, @message is nil in the view, since the result has not yet returned. The result arrives a few milliseconds after the view is rendered and is displayed on the console:
$ thin start
>> Using rack adapter
>> Thin web server (v1.5.0 codename Knife)
>> Maximum connections set to 1024
>> Listening on 0.0.0.0:3000, CTRL+C to stop
[request] Sending a request...
[response] Response for 3877031: "2012-11-27 22:04:28 -0600"
No surprise here: subscribe is clearly not meant for synchronous calls. What is surprising is that I can't find a synchronous alternative in the AMQP gem source code or in any documentation online. Is there an alternative to subscribe that will give me the RPC behavior I want? Given that there are other parts of the system in which I'd want to use legitimately asynchronous calls, the bunny gem didn't seem like the right tool for the job. Should I give it another look?
edit in response to Sam Stokes
Thanks to Sam for the pointer to throw :async / async.callback. I hadn't seen this technique before and this is exactly the kind of thing I was trying to learn with this experiment in the first place. send_response.finish is gone in Rails 3, but I was able to get his example to work for at least one request with a minor change:
render :text => @message
rendered_response = response.prepare!
Subsequent requests fail with !! Unexpected error while processing request: deadlock; recursive locking. This may have been what Sam was getting at with the comment about getting ActionController to allow concurrent requests, but the cited gist only works for Rails 2. Adding config.allow_concurrency = true in development.rb gets rid of this error in Rails 3, but leads to This queue already has default consumer. from AMQP.
I think this yak is sufficiently shaven. ;-)
While interesting, this is clearly overkill for simple RPC. Something like this Sinatra streaming example seems a more appropriate use case for client interaction with replies. Tenderlove also has a blog post about an upcoming way to stream events in Rails 4 that could work with AMQP.
As Sam points out in his discussion of the HTTP alternative, REST / HTTP makes perfect sense for the RPC portion of my system that involves two Rails apps. There are other parts of the system involving more classic asynchronous event publishing to Clojure apps. For these, the Rails app need only publish events in fire-and-forget fashion, so AMQP will work fine there using my original code without the reply queue.
You can get the behaviour you want - have the client make a simple HTTP request, to which your web app responds asynchronously - but you need more tricks. You need to use Thin's support for asynchronous responses:
require "amqp"
class WelcomeController < ApplicationController
def index
puts "[request] Sending a request..."
WelcomeController.channel.default_exchange.publish("get.time",
:routing_key => "amqpgem.examples.services.time",
:message_id => Kernel.rand(10101010).to_s,
:reply_to => WelcomeController.replies_queue.name)
WelcomeController.replies_queue.subscribe do |metadata, payload|
puts "[response] Response for #{metadata.correlation_id}: #{payload.inspect}"
#message = payload.inspect
# Trigger Rails response rendering now we have the message.
# Tested in Rails 2.3; may or may not work in Rails 3.x.
rendered_response = send_response.finish
# Pass the response to Thin and make it complete the request.
# env['async.callback'] expects a Rack-style response triple:
# [status, headers, body]
request.env['async.callback'].call(rendered_response)
end
# This unwinds the call stack, skipping the normal Rails response
# rendering, all the way back up to Thin, which catches it and
# interprets as "I'll give you the response later by calling
# env['async.callback']".
throw :async
end
def self.channel
#channel ||= AMQP::Channel.new(AMQP.connection)
end
def self.replies_queue
#replies_queue ||= channel.queue("reply", :exclusive => true, :auto_delete => true)
end
end
As far as the client is concerned, the result is indistinguishable from your web app blocking on a synchronous call before returning the response; but now your web app can process many such requests concurrently.
CAUTION!
Async Rails is an advanced technique; you need to know what you're doing. Some parts of Rails do not take kindly to having their call stack abruptly dismantled. The throw will bypass any Rack middlewares that don't know to catch and rethrow it (here is a rather old partial solution). ActiveSupport's development-mode class reloading will reload your app's classes after the throw, without waiting for the response, which can cause very confusing breakage if your callback refers to a class that has since been reloaded. You'll also need to ask ActionController nicely to allow concurrent requests.
Request/response
You're also going to need to match up requests and responses. As it stands, if Request 1 arrives, and then Request 2 arrives before Request 1 gets a response, then it's undefined which request would receive Response 1 (messages on a queue are distributed round-robin between the consumers subscribed to the queue).
You could do this by inspecting the correlation_id (which you'll have to explicitly set, by the way - RabbitMQ won't do it for you!) and re-enqueuing the message if it's not the response you were waiting for. My approach was to create a persistent Publisher object which would keep track of open requests, listen for all responses, and look up the appropriate callback to invoke based on the correlation_id.
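A rough sketch of that persistent Publisher idea (all names here are illustrative, not from an actual library); it relies on the server above echoing metadata.message_id back as the reply's correlation_id:
require 'securerandom'

class Publisher
  def initialize(channel, replies_queue)
    @channel = channel
    @replies_queue = replies_queue
    @pending = {}   # correlation_id => callback waiting for the reply
    replies_queue.subscribe do |metadata, payload|
      callback = @pending.delete(metadata.correlation_id)
      callback.call(payload) if callback
    end
  end

  def request(routing_key, payload, &callback)
    id = SecureRandom.hex(8)
    @pending[id] = callback
    @channel.default_exchange.publish(payload,
      :routing_key => routing_key,
      :message_id  => id,
      :reply_to    => @replies_queue.name)
  end
end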
Alternative: just use HTTP
You're really solving two different (and tricky!) problems here: persuading Rails/thin to process requests asynchronously, and implementing request-response semantics on top of AMQP's publish-subscribe model. Given you said this is for calling between two Rails apps, why not just use HTTP, which already has the request-response semantics you need? That way you only have to solve the first problem. You can still get concurrent request processing if you use a non-blocking HTTP client library, such as em-http-request.
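For completeness, a non-blocking call with em-http-request looks roughly like this (the URL is a placeholder for whatever endpoint the other Rails app exposes):
require 'em-http-request'

EM.run do
  http = EventMachine::HttpRequest.new('http://localhost:3001/time').get
  http.callback { puts "Got: #{http.response}"; EM.stop }
  http.errback  { puts "Request failed"; EM.stop }
end
Inside an already-running reactor (e.g. under Thin) you would drop the EM.run wrapper and just use the callbacks.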
