Rails: Convert REST API to websocket client

I have a typical Rails REST API written for HTTP consumers. However, it turns out they need a websocket API because of an integration with POS machines.
The typical API looks like this:
class Api::Pos::V1::TransactionsController < ApplicationController
  before_action :authenticate

  def index
    @transactions = @current_business.business_account.business_deposits.last(5)
    render json: {
      status: 200,
      number: @transactions.count,
      transactions: @transactions.as_json(only: [:created_at, :amount, :status, :client_card_number, :client_phone_number])
    }
  end

  private

  def request_params
    params.permit(:account_number, :api_key)
  end

  def authenticate
    render status: 401, json: {
      status: 401,
      error: "Authentication Failed."
    } unless current_business
  end

  def current_business
    account_number = request_params[:account_number].to_s
    api_key = request_params[:api_key].to_s
    if account_number and api_key
      account = BusinessAccount.find_by(account_number: account_number)
      if account && Business.find(account.business_id).business_api_key.token =~ /^(#{api_key})/
        @current_business = account.business
      else
        false
      end
    end
  end
end
How can I serve the same responses using websockets?
P.S.: I've never worked with sockets before.
Thank you

ActionCable
I would second Dimitris's reference to ActionCable, as it's expected to become part of Rails 5 and should (hopefully) integrate with Rails quite well.
As for Dimitris's suggestion of SSE, I would recommend against it.
SSE (Server-Sent Events) rely on a long-lived, one-way HTTP connection, and I would avoid this technology for a number of reasons, which include the issue of SSE connection interruptions and extensibility (websockets allow you to add features that SSE won't support).
I am almost tempted to go into a rant about SSE implementation performance issues, but... even though websocket implementations should be more performant, many of them suffer from similar issues, and the performance increase is often only thanks to the websocket connection's longer lifetime...
Plezi
Plezi* is a real-time web application framework for Ruby. You can either use it on its own (which is not relevant for you) or together with Rails.
With only minimal changes to your code, you should be able to use websockets to return results from your RESTful API. Plezi's Getting Started Guide has a section about unifying a backend's RESTful and websocket APIs. Implementing it in Rails should be similar.
Here's a bit of demo code. You can put it in a file called plezi.rb and place it in your application's config/initializers folder...
Just make sure you're not using any specific server (Thin, Puma, etc.), allowing Plezi to override the server and use the Iodine server, and remember to add Plezi to your Gemfile.
class WebsocketDemo
  # authenticate
  def on_open
    return close unless current_business
  end

  def on_message data
    data = JSON.parse(data) rescue nil
    return close unless data

    case data['msg']
    when /\Aget_transactions\z/i
      # call the RESTful API method here, if it's accessible. OR:
      transactions = @current_business.business_account.business_deposits.last(5)
      write({
        status: 200,
        number: transactions.count,
        # the next line has what I think is a design flaw, but I left it in
        transactions: transactions.as_json(only: [:created_at, :amount, :status, :client_card_number, :client_phone_number])
        # Consider, instead, to avoid nesting JSON streams:
        # transactions: transactions.select(:created_at, :amount, :status, :client_card_number, :client_phone_number)
      }.to_json)
    end
  end

  # don't disclose inner methods to the router
  protected

  # better to make the original method a class method, letting you reuse it.
  def current_business
    account_number = params[:account_number].to_s
    api_key = params[:api_key].to_s
    if account_number && api_key
      account = BusinessAccount.find_by(account_number: account_number)
      if account && Business.find(account.business_id).business_api_key.token =~ /^(#{api_key})/
        return (@current_business = account.business)
      end
      false
    end
  end
end

Plezi.route '/(:api_key)/(:account_number)', WebsocketDemo
Now we have a route that looks something like: wss://my.server.com/api_key/account_number
This route can be used to send and receive data in JSON format.
To get the transaction list, the client side application can send:
JSON.stringify({msg: "get_transactions"})
This will result in data being sent to the client's websocket.onmessage callback with the last five transactions.
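For quick testing from the command line (instead of a browser), a minimal Ruby client sketch using the faye-websocket and eventmachine gems could look like the following; the gem choice, URL and credentials are my own placeholders, not part of the original setup:

require 'faye/websocket'
require 'eventmachine'
require 'json'

EM.run do
  # placeholder URL; substitute a real api_key and account_number
  ws = Faye::WebSocket::Client.new('wss://my.server.com/API_KEY/ACCOUNT_NUMBER')

  ws.on :open do |_event|
    # ask for the last five transactions
    ws.send({ msg: 'get_transactions' }.to_json)
  end

  ws.on :message do |event|
    puts JSON.parse(event.data).inspect
    ws.close
  end

  ws.on :close do |_event|
    EM.stop
  end
end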
Of course, this is just a short demo, but I think it's a reasonable proof of concept.
* I should point out that I'm biased, as I'm Plezi's author.
P.S.
I would consider moving the authentication into a websocket "authenticate" message, allowing the application key to be sent in a less conspicuous manner.
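As a rough sketch, reusing the handler API from the demo above (the authenticate_business helper is hypothetical; it would wrap the same BusinessAccount lookup as current_business):

def on_message data
  data = JSON.parse(data) rescue nil
  return close unless data

  case data['msg']
  when /\Aauthenticate\z/i
    # credentials travel in the message body rather than in the URL
    @current_business = authenticate_business(data['account_number'], data['api_key'])
    return close unless @current_business
    write({ status: 200, msg: 'authenticated' }.to_json)
  when /\Aget_transactions\z/i
    return write({ status: 401, error: 'Authentication Failed.' }.to_json) unless @current_business
    # ... same as in the demo above
  end
end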
EDIT
These are answers to the questions in the comments.
Capistrano
I don't use Capistrano, so I'm not sure... but I think it would work if you add the following line to your Capistrano tasks:
Iodine.protocol = false
This will prevent the server from auto-starting, so your Capistrano tasks flow without interruption.
For example, at the beginning of the config/deploy.rb you can add the line:
Iodine.protocol = false
# then the rest of the file, i.e.:
set :deploy_to, '/var/www/my_app_name'
#...
You should also edit your Rakefile and add the same line at the beginning, so that it includes:
Iodine.protocol = false
Let me know how this works. Like I said, I don't use Capistrano and I haven't tested it out.
Keeping Passenger by using a second app
The Plezi documentation states that:
If you really feel attached to your thin, unicorn, puma or passenger server, you can still integrate Plezi with your existing application, but they won't be able to share the same process and you will need to utilize the Placebo API (a guide is coming soon).
But the guide isn't written yet...
There's some information in the GitHub Readme, but it will be removed after the guide is written.
Basically, you include the Plezi application (with the Redis URL set) inside your Rails application (remember to copy all the gems it uses into your Gemfile), then you add this line:
Plezi.start_placebo
That should be it.
Plezi will ignore the Plezi.start_placebo command if there is no other server defined, so you can put the command in a file shared with the Rails application, as long as Plezi's Gemfile doesn't include a different server.
You can include some or all of the Rails application code inside the Plezi application. As long as Plezi (Iodine, actually) is the only server in the Plezi Gemfile, it should work.
The applications will synchronize using Redis and you can use your Plezi code to broadcast websocket events inside your Rails application.
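As a rough sketch of the Rails side (assuming, on my part, that your Plezi controllers and routes live in a file such as lib/plezi_app.rb and that both apps point at the same Redis instance):

# config/initializers/plezi_placebo.rb -- sketch only
# Load the Plezi app (controllers + Plezi.route calls); the file path is an assumption.
require Rails.root.join('lib', 'plezi_app').to_s

# Placebo mode: no websocket server runs inside the Rails process;
# broadcasts are forwarded over Redis to the separate Plezi process.
Plezi.start_placebo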

You may want to have a look at https://github.com/rails/actioncable which is the Rails way to deal with WebSockets, but currently in Alpha.
Judging from your code snippet, the client seems to only consume data from your backend, so I'm skeptical whether you really need WebSockets. If the client won't push data back to the server, Server-Sent Events seem more appropriate.
See relevant walk-through and documentation.
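If you do go the SSE route, a minimal sketch using Rails' ActionController::Live could look like the following; the controller name is made up, and a real endpoint would keep the stream open and push updates rather than writing once:

# Sketch of an SSE endpoint (Rails 4+); names and the single write are illustrative.
class Api::Pos::V1::StreamsController < ApplicationController
  include ActionController::Live

  def index
    response.headers['Content-Type'] = 'text/event-stream'
    sse = SSE.new(response.stream, event: 'transactions')
    transactions = current_business.business_account.business_deposits.last(5)
    sse.write(transactions.as_json(only: [:created_at, :amount, :status]))
  ensure
    sse.close if sse
  end
end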

Related

Ruby nsq how to listen for new messages

My setup is as follows: one microservice that receives a request and writes a message to the queue (NSQ), and a second microservice that must read the messages from the queue and do something based on them.
I am new to the NSQ concept in Ruby on Rails. I have installed NSQ from here: http://nsq.io/overview/quick_start.html. I have also used this gem to facilitate pushing messages: https://github.com/wistia/nsq-ruby.
I have been able to queue the message. This part was easy enough.
Question:
How do I always listen in the 2nd microservice to figure out when something was pushed so I can consume it?
This is how I push messages:
require 'nsq'
class NsqService
  attr_accessor :producer, :topic

  def initialize(topic)
    @topic = topic
    @producer = producer
  end

  def publish(message)
    producer.write(message)
  end

  private

  def producer(nsqd = '127.0.0.1:4150')
    Nsq::Producer.new(
      nsqd: nsqd,
      topic: @topic
    )
  end
end
The example code for the nsq-ruby gem gives the following:
require 'nsq'
consumer = Nsq::Consumer.new(
  nsqlookupd: '127.0.0.1:4161',
  topic: 'some-topic',
  channel: 'some-channel'
)

# Pop a message off the queue
msg = consumer.pop
puts msg.body
msg.finish

# Close the connections
consumer.terminate
You could wrap this in a class for your service. You'll likely need to run some kind of middleware or a separate process to handle these connections if there are parts of your Rails code that need to interface with NSQ. While I have not used NSQ, I've used Sidekiq for background jobs and running async processes, and it has good instructions and examples of how to configure that kind of middleware. For more suggestions and help, you might try contacting some of the maintainers of the Ruby gem. I'm sure they can point you in the right direction.
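A rough sketch of such a long-running listener, wrapping the nsq-ruby consumer from the example above (class and method names are placeholders; run it as its own process, not inside a web request):

require 'nsq'

class NsqListener
  def initialize(topic:, channel:, nsqlookupd: '127.0.0.1:4161')
    @consumer = Nsq::Consumer.new(nsqlookupd: nsqlookupd, topic: topic, channel: channel)
  end

  def run
    loop do
      msg = @consumer.pop   # blocks until a message is available
      handle(msg.body)
      msg.finish            # acknowledge so NSQ doesn't re-deliver the message
    end
  ensure
    @consumer.terminate
  end

  private

  def handle(body)
    # do something useful here, e.g. enqueue a background job
    puts "received: #{body}"
  end
end

NsqListener.new(topic: 'some-topic', channel: 'some-channel').run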

Rails ActionCable API mode

I am going to create a Rails API that handles WebSockets based on ActionCable (first of all, is it a good idea to use ActionCable in API mode?). ActionCable works well for a full-stack Rails application, but I encountered difficulties with the API. The first question is: what format should requests to the ActionCable server have? All I've found so far is the subscribe action:
{
  "command": "subscribe",
  "identifier": "{\"channel\":\"SomeChannel\"}"
}
How about others? Is there any documentation where I can find that?
Thanks in advance
I would probably avoid using the ActionCable semantics and internal protocol for an API project that includes non-browser clients.
For example:
ActionCable's internal semantics / protocol might change between versions. Since your code will be tightly coupled with ActionCable's internal workings, it might be harder to upgrade.
ActionCable's internal semantics / protocol might or might not include everything you need, whereas writing your own Websocket messaging protocol (especially using JSON) is super easy and will offer you an exact fit.
This doesn't mean you need to completely move away from Rails. It should be easy enough to use your Rails models and code within a non-Rails Websocket alternative.
Also, Ruby has some nice Websocket alternatives for ActionCable.
I'm biased, being the author of both Iodine (an HTTP/Websocket server with native pub/sub) and Plezi.io (a real-time web application framework)... but I would probably use Iodine (with or without the added comfort offered by Plezi).
A simple Websocket application with Plezi will look something like this (seriously, run the following code from the terminal using irb, it works):
require 'plezi'

class ChatServer
  def index
    "Use Websockets to connect."
  end

  def on_open
    @name = params['id'] || "anonymous"
    subscribe channel: "chat"
    publish channel: "chat", message: "#{@name} joined the chat."
    write "Welcome, #{@name}!"
  end

  def on_close
    publish channel: "chat", message: "#{@name} left the chat."
  end

  def on_message data
    publish channel: "chat", message: "#{@name}: #{data}"
  end

  def on_shutdown
    write "Server shutting down. Goodbye, #{@name}."
  end
end

Plezi.route '/', ChatServer

# We'll monitor messages just for kicks:
subscription = Iodine.subscribe(pattern: "*") do |channel, message|
  # print a log?
  puts "\n* Message on channel #{channel}:\n#{message}\n"
end

# make sure we don't duplicate our monitoring on every process.
root_pid = Process.pid
Iodine.run { Iodine.unsubscribe(subscription) unless Process.pid == root_pid }

exit
No Redis server required, no special things to prepare, and it's possible to use Plezi as middleware within a Rails application (running the Iodine server instead of Puma).

What is right way to accept a lot of requests with Rails controller?

My app controller accepts requests from a third-party API (webhooks), but when it reaches 400 requests per minute my site goes down (too many clients). What can I do about it?
class CallbacksController < ApplicationController
  def acceptor
    if params['type'] == 'confirmation' # this type is rare; only when the client switches on the callback
      group_setting = GroupSetting.find_by_callback_token(params[:callback_token])
      if group_setting
        group_setting.update_attribute(:use_callback, true)
        GroupSetting.new.callback_start(group_setting.group, group_setting.user)
        render text: group_setting.response_string
      else
        render text: 'ok'
      end
    else
      CallbackWorker.perform_async(params[:callback_token], params['type'],
                                   params['group_id'], params['object'],
                                   params['secret'])
      render text: 'ok'
    end
  end
end
It seems to me that you have a web server thread bottleneck. Could you specify which server you are using? Can you run an Apache Bench (ab) test and post the results? Maybe a little more information on your setup could help.
If you are using WEBrick, I would advise trying Puma.
I would also suggest that you check out Passenger, which integrates easily with Nginx, or Unicorn; either can help you with load balancing your requests.
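If you do switch to Puma, a minimal config/puma.rb sketch might look like this; the worker and thread counts are placeholders to tune for your machine and workload:

# config/puma.rb -- sketch only; the numbers are placeholders
workers Integer(ENV.fetch('WEB_CONCURRENCY', 2))    # cluster mode: one process per core is a common starting point
threads_count = Integer(ENV.fetch('RAILS_MAX_THREADS', 5))
threads threads_count, threads_count                # min, max threads per worker

preload_app!
port        ENV.fetch('PORT', 3000)
environment ENV.fetch('RAILS_ENV', 'production')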

Using Rethinkdb's change feed in a webapp

How can I use RethinkDB's changefeeds in a webapp (see http://www.rethinkdb.com/docs/changefeeds/ruby/)? I currently use Ruby on Rails. I've tried Googling 'rethinkdb "change feed" rails' and 'rethinkdb "change feed" websocket'.
I would like to display updates to a RethinkDB table on a webpage with as low latency as possible.
RethinkDB is meant to be used from the server (from Rails) and not from the client. It's really important to understand this! If you have a listener on your data (a changefeed), then those changes will get routed to your Rails app.
If you want to query RethinkDB from the front-end (from the browser), you might be interested in these two projects:
https://github.com/mikemintz/rethinkdb-websocket-client
https://github.com/mikemintz/rethinkdb-websocket-server
Once these changes are routed to your application, you can do with them as you wish. If you want to route those changes to the front-end just to show them to your users, you can send them through a websocket. Faye is a really good library for doing this.
This is how it would look. In your Ruby code, you would add something like this:
# Add Faye
App = Faye::RackAdapter.new MessageApp, mount: "/faye"

# Changefeed listener
r.table("messages").changes.em_run(Conn) do |err, change|
  App.get_client.publish('/message/new', change["new_val"])
end
Basically, whenever there's a change in the messages table, send the new value over the web socket. You can take a look at the full example (with front-end code) here:
https://github.com/thejsj/ruby-and-rethinkdb/
And here is the Ruby file:
https://github.com/thejsj/ruby-and-rethinkdb/blob/master/server/main.rb
RethinkDB doesn't seem to support complex client authentication (the auth token is shared amongst all clients), so you can't do that client-side from JavaScript.
But you can create a pipeline: run a websocket server alongside your app, which will fetch records from RethinkDB and pass them to clients. Using em-websocket it will look something like this:
require 'em-websocket'
require 'rethinkdb'
require 'json'
include RethinkDB::Shortcuts

EventMachine.run do
  @clients = []

  EM::WebSocket.run(:host => '0.0.0.0', :port => 3001) do |ws|
    ws.onopen do |handshake|
      @clients << ws
    end

    ws.onclose do
      @clients.delete ws
    end
  end

  # Iterate the changefeed off the reactor thread (the cursor blocks forever),
  # pushing each change to every connected client.
  EM.defer do
    conn = r.connect(host: '127.0.0.1', port: 28015)
    r.table("authors").changes.run(conn).each do |document|
      EM.next_tick { @clients.each { |client| client.send document.to_json } }
    end
  end
end

Re-entrant subrequests in Rack/Rails

I've got a couple Engine plugins with metal endpoints that implement some extremely simple web services I intend to share across multiple applications. They work just fine as they are, but obviously, while loading them locally for development and testing, sending Net::HTTP a get_response message to ask localhost for another page from inside the currently executing controller object results in instant deadlock.
So my question is, does Rails' (or Rack's) routing system provide a way to safely consume a web service which may or may not be a part of the same app under the same server instance, or will I have to hack a special case together with render_to_string for those times when the hostname in the URI matches my own?
It doesn't work in development because the server is only serving one request at a time, and the controller's request gets stuck. If you need this, you can run multiple servers locally behind a load balancer. I recommend using Passenger even for development (and the prefpane if you are on OS X).
My recommendation for you is to separate the internal web services and the applications that use them. This way you do not duplicate the code and you can easily scale and control them individually.
This is in fact possible. However, you need to ensure that the services you call are not calling each other recursively.
A really simple "reentrant" Rack middleware could work like this:
class Reentry < Struct.new(:app)
  def call(env)
    @current_env = env
    app.call(env.merge('reentry' => self))
  end

  def call_rack(request_uri)
    env_for_recursive_call = @current_env.dup
    env_for_recursive_call['PATH_INFO'] = request_uri # ...and more
    status, headers, response = call(env_for_recursive_call)
    # for example, return the response body as a String
    body = +''
    response.each { |part| body << part }
    body
  end
end
Then in the calling code:
env['reentry'].call_rack('/my/api/get-json')
A very valid use case for this is sideloading API responses in JSON
format within your main page.
Obviously the success of this technique will depend on the sophistication
of your Rack stack (as some parts of the Rack env will not like being reused).
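For completeness, a sketch of wiring the middleware into a Rails app (the module name is a placeholder, and it assumes the Reentry class above is required or autoloaded):

# config/application.rb -- sketch only
module MyApp
  class Application < Rails::Application
    config.middleware.use Reentry
  end
end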
